2025-06-02 12:33:48.682137 | Job console starting
2025-06-02 12:33:48.717942 | Updating git repos
2025-06-02 12:33:48.804204 | Cloning repos into workspace
2025-06-02 12:33:49.019736 | Restoring repo states
2025-06-02 12:33:49.039488 | Merging changes
2025-06-02 12:33:49.039519 | Checking out repos
2025-06-02 12:33:49.345907 | Preparing playbooks
2025-06-02 12:33:49.984453 | Running Ansible setup
2025-06-02 12:33:54.293199 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 12:33:55.054255 |
2025-06-02 12:33:55.054426 | PLAY [Base pre]
2025-06-02 12:33:55.071712 |
2025-06-02 12:33:55.071862 | TASK [Setup log path fact]
2025-06-02 12:33:55.091440 | orchestrator | ok
2025-06-02 12:33:55.109268 |
2025-06-02 12:33:55.109435 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 12:33:55.157667 | orchestrator | ok
2025-06-02 12:33:55.174458 |
2025-06-02 12:33:55.174601 | TASK [emit-job-header : Print job information]
2025-06-02 12:33:55.232550 | # Job Information
2025-06-02 12:33:55.232825 | Ansible Version: 2.16.14
2025-06-02 12:33:55.232883 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-02 12:33:55.232940 | Pipeline: post
2025-06-02 12:33:55.232980 | Executor: 521e9411259a
2025-06-02 12:33:55.233017 | Triggered by: https://github.com/osism/testbed/commit/5813b17ae086a94a05f5680616379ffb7585bf19
2025-06-02 12:33:55.233106 | Event ID: d0e18cf0-3fad-11f0-9fc6-74ef1215d406
2025-06-02 12:33:55.242357 |
2025-06-02 12:33:55.242494 | LOOP [emit-job-header : Print node information]
2025-06-02 12:33:55.393274 | orchestrator | ok:
2025-06-02 12:33:55.393494 | orchestrator | # Node Information
2025-06-02 12:33:55.393528 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 12:33:55.393554 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 12:33:55.393575 | orchestrator | Username: zuul-testbed03
2025-06-02 12:33:55.393595 | orchestrator | Distro: Debian 12.11
2025-06-02 12:33:55.393620 | orchestrator | Provider: static-testbed
2025-06-02 12:33:55.393641 | orchestrator | Region:
2025-06-02 12:33:55.393662 | orchestrator | Label: testbed-orchestrator
2025-06-02 12:33:55.393682 | orchestrator | Product Name: OpenStack Nova
2025-06-02 12:33:55.393702 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 12:33:55.422709 |
2025-06-02 12:33:55.422913 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 12:33:55.913073 | orchestrator -> localhost | changed
2025-06-02 12:33:55.921524 |
2025-06-02 12:33:55.921666 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 12:33:56.985362 | orchestrator -> localhost | changed
2025-06-02 12:33:56.999727 |
2025-06-02 12:33:56.999872 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 12:33:57.348005 | orchestrator -> localhost | ok
2025-06-02 12:33:57.355461 |
2025-06-02 12:33:57.355593 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 12:33:57.394768 | orchestrator | ok
2025-06-02 12:33:57.411481 | orchestrator | included: /var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 12:33:57.419672 |
2025-06-02 12:33:57.419784 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 12:33:58.469964 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 12:33:58.470510 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/03e6f846fb7344cdb17eecf8934c4468_id_rsa
2025-06-02 12:33:58.470629 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/03e6f846fb7344cdb17eecf8934c4468_id_rsa.pub
2025-06-02 12:33:58.470714 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 12:33:58.470788 | orchestrator -> localhost | SHA256:kwScL+T+kbWwDw05QWWk2A9k8I/3HYYfOgIUDJb65Dw zuul-build-sshkey
2025-06-02 12:33:58.470929 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 12:33:58.471023 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 12:33:58.471127 | orchestrator -> localhost | | .=B=o+ |
2025-06-02 12:33:58.471192 | orchestrator -> localhost | | .=Bo+ |
2025-06-02 12:33:58.471251 | orchestrator -> localhost | | +..Bo |
2025-06-02 12:33:58.471306 | orchestrator -> localhost | | . +o**. . |
2025-06-02 12:33:58.471363 | orchestrator -> localhost | | * .SO+.. + |
2025-06-02 12:33:58.471432 | orchestrator -> localhost | | E =+o. = o |
2025-06-02 12:33:58.471494 | orchestrator -> localhost | | o +. + o |
2025-06-02 12:33:58.471552 | orchestrator -> localhost | | . .. . |
2025-06-02 12:33:58.471612 | orchestrator -> localhost | | |
2025-06-02 12:33:58.471668 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 12:33:58.471810 | orchestrator -> localhost | ok: Runtime: 0:00:00.533403
2025-06-02 12:33:58.485670 |
2025-06-02 12:33:58.485806 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 12:33:58.516800 | orchestrator | ok
2025-06-02 12:33:58.527093 | orchestrator | included: /var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 12:33:58.536234 |
2025-06-02 12:33:58.536335 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 12:33:58.551707 | orchestrator | skipping: Conditional result was False
2025-06-02 12:33:58.559465 |
2025-06-02 12:33:58.559575 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 12:33:59.164436 | orchestrator | changed
2025-06-02 12:33:59.173675 |
2025-06-02 12:33:59.173824 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 12:33:59.474733 | orchestrator | ok
2025-06-02 12:33:59.483854 |
2025-06-02 12:33:59.483987 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 12:33:59.924547 | orchestrator | ok
2025-06-02 12:33:59.933556 |
2025-06-02 12:33:59.933694 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 12:34:00.359678 | orchestrator | ok
2025-06-02 12:34:00.365855 |
2025-06-02 12:34:00.365968 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 12:34:00.390010 | orchestrator | skipping: Conditional result was False
2025-06-02 12:34:00.405338 |
2025-06-02 12:34:00.405501 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 12:34:00.877182 | orchestrator -> localhost | changed
2025-06-02 12:34:00.891789 |
2025-06-02 12:34:00.891928 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 12:34:01.259735 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/03e6f846fb7344cdb17eecf8934c4468_id_rsa (zuul-build-sshkey)
2025-06-02 12:34:01.259997 | orchestrator -> localhost | ok: Runtime: 0:00:00.020760
2025-06-02 12:34:01.267696 |
2025-06-02 12:34:01.267820 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 12:34:01.717355 | orchestrator | ok
2025-06-02 12:34:01.724585 |
2025-06-02 12:34:01.724712 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 12:34:01.748931 | orchestrator | skipping: Conditional result was False
2025-06-02 12:34:01.800433 |
2025-06-02 12:34:01.800572 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 12:34:02.233849 | orchestrator | ok
2025-06-02 12:34:02.249526 |
2025-06-02 12:34:02.249669 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 12:34:02.291921 | orchestrator | ok
2025-06-02 12:34:02.300437 |
2025-06-02 12:34:02.300554 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 12:34:02.569885 | orchestrator -> localhost | ok
2025-06-02 12:34:02.580965 |
2025-06-02 12:34:02.581151 | TASK [validate-host : Collect information about the host]
2025-06-02 12:34:03.851652 | orchestrator | ok
2025-06-02 12:34:03.867234 |
2025-06-02 12:34:03.867355 | TASK [validate-host : Sanitize hostname]
2025-06-02 12:34:03.926629 | orchestrator | ok
2025-06-02 12:34:03.932290 |
2025-06-02 12:34:03.932418 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 12:34:04.496023 | orchestrator -> localhost | changed
2025-06-02 12:34:04.502704 |
2025-06-02 12:34:04.502821 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 12:34:04.930069 | orchestrator | ok
2025-06-02 12:34:04.935684 |
2025-06-02 12:34:04.935807 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 12:34:05.582622 | orchestrator -> localhost | changed
2025-06-02 12:34:05.606894 |
2025-06-02 12:34:05.607158 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 12:34:05.902183 | orchestrator | ok
2025-06-02 12:34:05.909958 |
2025-06-02 12:34:05.910107 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 12:34:44.032530 | orchestrator | changed:
2025-06-02 12:34:44.032843 | orchestrator | .d..t...... src/
2025-06-02 12:34:44.032901 | orchestrator | .d..t...... src/github.com/
2025-06-02 12:34:44.032943 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 12:34:44.032979 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 12:34:44.033013 | orchestrator | RedHat.yml
2025-06-02 12:34:44.046924 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 12:34:44.046945 | orchestrator | RedHat.yml
2025-06-02 12:34:44.047008 | orchestrator | = 1.53.0"...
2025-06-02 12:34:57.300623 | orchestrator | 12:34:57.300 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-02 12:34:57.385337 | orchestrator | 12:34:57.385 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-02 12:34:58.633977 | orchestrator | 12:34:58.633 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 12:34:59.694396 | orchestrator | 12:34:59.694 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 12:35:00.652028 | orchestrator | 12:35:00.651 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 12:35:01.501223 | orchestrator | 12:35:01.500 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 12:35:02.391469 | orchestrator | 12:35:02.391 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 12:35:03.235460 | orchestrator | 12:35:03.235 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 12:35:03.235548 | orchestrator | 12:35:03.235 STDOUT terraform: Providers are signed by their developers.
2025-06-02 12:35:03.235557 | orchestrator | 12:35:03.235 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 12:35:03.235562 | orchestrator | 12:35:03.235 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 12:35:03.235586 | orchestrator | 12:35:03.235 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 12:35:03.235659 | orchestrator | 12:35:03.235 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 12:35:03.235728 | orchestrator | 12:35:03.235 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 12:35:03.235850 | orchestrator | 12:35:03.235 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 12:35:03.235857 | orchestrator | 12:35:03.235 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 12:35:03.235863 | orchestrator | 12:35:03.235 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 12:35:03.235933 | orchestrator | 12:35:03.235 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 12:35:03.235977 | orchestrator | 12:35:03.235 STDOUT terraform: should now work.
2025-06-02 12:35:03.235984 | orchestrator | 12:35:03.235 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 12:35:03.236041 | orchestrator | 12:35:03.235 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 12:35:03.236094 | orchestrator | 12:35:03.236 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 12:35:03.418290 | orchestrator | 12:35:03.418 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 12:35:03.619820 | orchestrator | 12:35:03.619 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 12:35:03.619884 | orchestrator | 12:35:03.619 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 12:35:03.619893 | orchestrator | 12:35:03.619 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 12:35:03.619897 | orchestrator | 12:35:03.619 STDOUT terraform: for this configuration.
2025-06-02 12:35:03.850558 | orchestrator | 12:35:03.850 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 12:35:03.952592 | orchestrator | 12:35:03.952 STDOUT terraform: ci.auto.tfvars
2025-06-02 12:35:03.963855 | orchestrator | 12:35:03.963 STDOUT terraform: default_custom.tf
2025-06-02 12:35:04.171174 | orchestrator | 12:35:04.170 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-06-02 12:35:05.166101 | orchestrator | 12:35:05.165 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
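The Terragrunt warnings repeated above state the fix themselves: replace the deprecated `TERRAGRUNT_TFPATH` variable with `TG_TF_PATH`. A minimal migration sketch for the job environment, assuming a POSIX shell (the path is taken verbatim from the warning):

```shell
# TERRAGRUNT_TFPATH is deprecated; Terragrunt asks for TG_TF_PATH instead.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH=/home/zuul-testbed03/terraform
echo "$TG_TF_PATH"
```

In a Zuul job this would typically be set via the job's environment variables rather than an interactive shell.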
2025-06-02 12:35:06.088613 | orchestrator | 12:35:06.088 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 12:35:06.334135 | orchestrator | 12:35:06.328 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 12:35:06.334236 | orchestrator | 12:35:06.328 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 12:35:06.334247 | orchestrator | 12:35:06.328 STDOUT terraform:   + create
2025-06-02 12:35:06.334255 | orchestrator | 12:35:06.328 STDOUT terraform:  <= read (data resources)
2025-06-02 12:35:06.334263 | orchestrator | 12:35:06.328 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 12:35:06.334272 | orchestrator | 12:35:06.328 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-02 12:35:06.334279 | orchestrator | 12:35:06.328 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 12:35:06.334286 | orchestrator | 12:35:06.328 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 12:35:06.334292 | orchestrator | 12:35:06.328 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 12:35:06.334299 | orchestrator | 12:35:06.329 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 12:35:06.334305 | orchestrator | 12:35:06.329 STDOUT terraform:   + file = (known after apply)
2025-06-02 12:35:06.334312 | orchestrator | 12:35:06.329 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334319 | orchestrator | 12:35:06.329 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.334326 | orchestrator | 12:35:06.329 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 12:35:06.334332 | orchestrator | 12:35:06.329 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 12:35:06.334339 | orchestrator | 12:35:06.329 STDOUT terraform:   + most_recent = true
2025-06-02 12:35:06.334365 | orchestrator | 12:35:06.329 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:06.334372 | orchestrator | 12:35:06.329 STDOUT terraform:   + protected = (known after apply)
2025-06-02 12:35:06.334379 | orchestrator | 12:35:06.329 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.334386 | orchestrator | 12:35:06.329 STDOUT terraform:   + schema = (known after apply)
2025-06-02 12:35:06.334392 | orchestrator | 12:35:06.329 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 12:35:06.334399 | orchestrator | 12:35:06.329 STDOUT terraform:   + tags = (known after apply)
2025-06-02 12:35:06.334406 | orchestrator | 12:35:06.329 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 12:35:06.334413 | orchestrator | 12:35:06.329 STDOUT terraform:   }
2025-06-02 12:35:06.334420 | orchestrator | 12:35:06.329 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 12:35:06.334426 | orchestrator | 12:35:06.329 STDOUT terraform:   # (config refers to values not yet known)
2025-06-02 12:35:06.334436 | orchestrator | 12:35:06.329 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 12:35:06.334443 | orchestrator | 12:35:06.329 STDOUT terraform:   + checksum = (known after apply)
2025-06-02 12:35:06.334449 | orchestrator | 12:35:06.330 STDOUT terraform:   + created_at = (known after apply)
2025-06-02 12:35:06.334456 | orchestrator | 12:35:06.330 STDOUT terraform:   + file = (known after apply)
2025-06-02 12:35:06.334463 | orchestrator | 12:35:06.330 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334470 | orchestrator | 12:35:06.330 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.334476 | orchestrator | 12:35:06.330 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-06-02 12:35:06.334483 | orchestrator | 12:35:06.330 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-06-02 12:35:06.334489 | orchestrator | 12:35:06.330 STDOUT terraform:   + most_recent = true
2025-06-02 12:35:06.334496 | orchestrator | 12:35:06.330 STDOUT terraform:   + name = (known after apply)
2025-06-02 12:35:06.334503 | orchestrator | 12:35:06.330 STDOUT terraform:   + protected = (known after apply)
2025-06-02 12:35:06.334509 | orchestrator | 12:35:06.330 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.334532 | orchestrator | 12:35:06.330 STDOUT terraform:   + schema = (known after apply)
2025-06-02 12:35:06.334544 | orchestrator | 12:35:06.330 STDOUT terraform:   + size_bytes = (known after apply)
2025-06-02 12:35:06.334551 | orchestrator | 12:35:06.330 STDOUT terraform:   + tags = (known after apply)
2025-06-02 12:35:06.334558 | orchestrator | 12:35:06.330 STDOUT terraform:   + updated_at = (known after apply)
2025-06-02 12:35:06.334565 | orchestrator | 12:35:06.330 STDOUT terraform:   }
2025-06-02 12:35:06.334571 | orchestrator | 12:35:06.330 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-02 12:35:06.334578 | orchestrator | 12:35:06.330 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 12:35:06.334585 | orchestrator | 12:35:06.330 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:06.334592 | orchestrator | 12:35:06.330 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:06.334604 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:06.334610 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:06.334617 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:06.334624 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:06.334631 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:06.334637 | orchestrator | 12:35:06.331 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:06.334645 | orchestrator | 12:35:06.331 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:06.334651 | orchestrator | 12:35:06.331 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 12:35:06.334658 | orchestrator | 12:35:06.331 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334665 | orchestrator | 12:35:06.331 STDOUT terraform:   }
2025-06-02 12:35:06.334672 | orchestrator | 12:35:06.331 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-02 12:35:06.334678 | orchestrator | 12:35:06.331 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-02 12:35:06.334685 | orchestrator | 12:35:06.331 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:06.334692 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:06.334698 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:06.334705 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:06.334712 | orchestrator | 12:35:06.331 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:06.334719 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:06.334725 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:06.334732 | orchestrator | 12:35:06.332 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:06.334739 | orchestrator | 12:35:06.332 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:06.334745 | orchestrator | 12:35:06.332 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-06-02 12:35:06.334752 | orchestrator | 12:35:06.332 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334759 | orchestrator | 12:35:06.332 STDOUT terraform:   }
2025-06-02 12:35:06.334766 | orchestrator | 12:35:06.332 STDOUT terraform:   # local_file.inventory will be created
2025-06-02 12:35:06.334772 | orchestrator | 12:35:06.332 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-02 12:35:06.334779 | orchestrator | 12:35:06.332 STDOUT terraform:   + content = (known after apply)
2025-06-02 12:35:06.334786 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:06.334792 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:06.334807 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:06.334818 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:06.334825 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:06.334831 | orchestrator | 12:35:06.332 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:06.334838 | orchestrator | 12:35:06.332 STDOUT terraform:   + directory_permission = "0777"
2025-06-02 12:35:06.334845 | orchestrator | 12:35:06.332 STDOUT terraform:   + file_permission = "0644"
2025-06-02 12:35:06.334851 | orchestrator | 12:35:06.332 STDOUT terraform:   + filename = "inventory.ci"
2025-06-02 12:35:06.334858 | orchestrator | 12:35:06.332 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334865 | orchestrator | 12:35:06.332 STDOUT terraform:   }
2025-06-02 12:35:06.334871 | orchestrator | 12:35:06.333 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-02 12:35:06.334878 | orchestrator | 12:35:06.333 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-02 12:35:06.334885 | orchestrator | 12:35:06.333 STDOUT terraform:   + content = (sensitive value)
2025-06-02 12:35:06.334892 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-06-02 12:35:06.334899 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-06-02 12:35:06.334905 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_md5 = (known after apply)
2025-06-02 12:35:06.334912 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_sha1 = (known after apply)
2025-06-02 12:35:06.334919 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_sha256 = (known after apply)
2025-06-02 12:35:06.334926 | orchestrator | 12:35:06.333 STDOUT terraform:   + content_sha512 = (known after apply)
2025-06-02 12:35:06.334932 | orchestrator | 12:35:06.333 STDOUT terraform:   + directory_permission = "0700"
2025-06-02 12:35:06.334939 | orchestrator | 12:35:06.333 STDOUT terraform:   + file_permission = "0600"
2025-06-02 12:35:06.334946 | orchestrator | 12:35:06.333 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-06-02 12:35:06.334953 | orchestrator | 12:35:06.333 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334959 | orchestrator | 12:35:06.333 STDOUT terraform:   }
2025-06-02 12:35:06.334966 | orchestrator | 12:35:06.333 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-02 12:35:06.334973 | orchestrator | 12:35:06.333 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-02 12:35:06.334979 | orchestrator | 12:35:06.333 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.334986 | orchestrator | 12:35:06.333 STDOUT terraform:   }
2025-06-02 12:35:06.334993 | orchestrator | 12:35:06.333 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 12:35:06.335034 | orchestrator | 12:35:06.333 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 12:35:06.335149 | orchestrator | 12:35:06.335 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.335299 | orchestrator | 12:35:06.335 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.335398 | orchestrator | 12:35:06.335 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.335540 | orchestrator | 12:35:06.335 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.335643 | orchestrator | 12:35:06.335 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.335769 | orchestrator | 12:35:06.335 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-06-02 12:35:06.335868 | orchestrator | 12:35:06.335 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.335938 | orchestrator | 12:35:06.335 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.336013 | orchestrator | 12:35:06.335 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.336088 | orchestrator | 12:35:06.336 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.336136 | orchestrator | 12:35:06.336 STDOUT terraform:   }
2025-06-02 12:35:06.336289 | orchestrator | 12:35:06.336 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 12:35:06.336417 | orchestrator | 12:35:06.336 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.336512 | orchestrator | 12:35:06.336 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.336588 | orchestrator | 12:35:06.336 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.336680 | orchestrator | 12:35:06.336 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.336773 | orchestrator | 12:35:06.336 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.336887 | orchestrator | 12:35:06.336 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.337007 | orchestrator | 12:35:06.336 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-06-02 12:35:06.337099 | orchestrator | 12:35:06.337 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.337169 | orchestrator | 12:35:06.337 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.337256 | orchestrator | 12:35:06.337 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.337330 | orchestrator | 12:35:06.337 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.337367 | orchestrator | 12:35:06.337 STDOUT terraform:   }
2025-06-02 12:35:06.337496 | orchestrator | 12:35:06.337 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 12:35:06.337604 | orchestrator | 12:35:06.337 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.337726 | orchestrator | 12:35:06.337 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.337830 | orchestrator | 12:35:06.337 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.337929 | orchestrator | 12:35:06.337 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.338026 | orchestrator | 12:35:06.337 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.338129 | orchestrator | 12:35:06.338 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.338325 | orchestrator | 12:35:06.338 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-06-02 12:35:06.338432 | orchestrator | 12:35:06.338 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.338486 | orchestrator | 12:35:06.338 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.338553 | orchestrator | 12:35:06.338 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.338620 | orchestrator | 12:35:06.338 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.338654 | orchestrator | 12:35:06.338 STDOUT terraform:   }
2025-06-02 12:35:06.338761 | orchestrator | 12:35:06.338 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 12:35:06.338874 | orchestrator | 12:35:06.338 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.338964 | orchestrator | 12:35:06.338 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.339029 | orchestrator | 12:35:06.338 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.339122 | orchestrator | 12:35:06.339 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.339219 | orchestrator | 12:35:06.339 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.339302 | orchestrator | 12:35:06.339 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.339388 | orchestrator | 12:35:06.339 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-06-02 12:35:06.339471 | orchestrator | 12:35:06.339 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.339512 | orchestrator | 12:35:06.339 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.339556 | orchestrator | 12:35:06.339 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.339601 | orchestrator | 12:35:06.339 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.339651 | orchestrator | 12:35:06.339 STDOUT terraform:   }
2025-06-02 12:35:06.339731 | orchestrator | 12:35:06.339 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 12:35:06.339807 | orchestrator | 12:35:06.339 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.339868 | orchestrator | 12:35:06.339 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.339913 | orchestrator | 12:35:06.339 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.339986 | orchestrator | 12:35:06.339 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.340071 | orchestrator | 12:35:06.340 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.340142 | orchestrator | 12:35:06.340 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.340233 | orchestrator | 12:35:06.340 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-06-02 12:35:06.340306 | orchestrator | 12:35:06.340 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.340388 | orchestrator | 12:35:06.340 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.340444 | orchestrator | 12:35:06.340 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.340490 | orchestrator | 12:35:06.340 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.340520 | orchestrator | 12:35:06.340 STDOUT terraform:   }
2025-06-02 12:35:06.340607 | orchestrator | 12:35:06.340 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 12:35:06.340684 | orchestrator | 12:35:06.340 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.340766 | orchestrator | 12:35:06.340 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.340811 | orchestrator | 12:35:06.340 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.340874 | orchestrator | 12:35:06.340 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.340947 | orchestrator | 12:35:06.340 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.341011 | orchestrator | 12:35:06.340 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.341089 | orchestrator | 12:35:06.341 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-06-02 12:35:06.341154 | orchestrator | 12:35:06.341 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.341213 | orchestrator | 12:35:06.341 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.341276 | orchestrator | 12:35:06.341 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.341324 | orchestrator | 12:35:06.341 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.341355 | orchestrator | 12:35:06.341 STDOUT terraform:   }
2025-06-02 12:35:06.341431 | orchestrator | 12:35:06.341 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 12:35:06.341517 | orchestrator | 12:35:06.341 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 12:35:06.341585 | orchestrator | 12:35:06.341 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.341632 | orchestrator | 12:35:06.341 STDOUT terraform:   + availability_zone = "nova"
2025-06-02 12:35:06.341693 | orchestrator | 12:35:06.341 STDOUT terraform:   + id = (known after apply)
2025-06-02 12:35:06.341756 | orchestrator | 12:35:06.341 STDOUT terraform:   + image_id = (known after apply)
2025-06-02 12:35:06.341828 | orchestrator | 12:35:06.341 STDOUT terraform:   + metadata = (known after apply)
2025-06-02 12:35:06.341904 | orchestrator | 12:35:06.341 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-06-02 12:35:06.341969 | orchestrator | 12:35:06.341 STDOUT terraform:   + region = (known after apply)
2025-06-02 12:35:06.342028 | orchestrator | 12:35:06.341 STDOUT terraform:   + size = 80
2025-06-02 12:35:06.342086 | orchestrator | 12:35:06.342 STDOUT terraform:   + volume_retype_policy = "never"
2025-06-02 12:35:06.342141 | orchestrator | 12:35:06.342 STDOUT terraform:   + volume_type = "ssd"
2025-06-02 12:35:06.342178 | orchestrator | 12:35:06.342 STDOUT terraform:   }
2025-06-02 12:35:06.342304 | orchestrator | 12:35:06.342 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 12:35:06.342392 | orchestrator | 12:35:06.342 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 12:35:06.342459 | orchestrator | 12:35:06.342 STDOUT terraform:   + attachment = (known after apply)
2025-06-02 12:35:06.342506 | orchestrator | 12:35:06.342 STDOUT terraform:   +
availability_zone = "nova" 2025-06-02 12:35:06.342567 | orchestrator | 12:35:06.342 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.342631 | orchestrator | 12:35:06.342 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.342698 | orchestrator | 12:35:06.342 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 12:35:06.342758 | orchestrator | 12:35:06.342 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.342796 | orchestrator | 12:35:06.342 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.342836 | orchestrator | 12:35:06.342 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.342877 | orchestrator | 12:35:06.342 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.342904 | orchestrator | 12:35:06.342 STDOUT terraform:  } 2025-06-02 12:35:06.342972 | orchestrator | 12:35:06.342 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 12:35:06.343041 | orchestrator | 12:35:06.342 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.343098 | orchestrator | 12:35:06.343 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.343139 | orchestrator | 12:35:06.343 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.343210 | orchestrator | 12:35:06.343 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.343267 | orchestrator | 12:35:06.343 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.343331 | orchestrator | 12:35:06.343 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 12:35:06.343386 | orchestrator | 12:35:06.343 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.343425 | orchestrator | 12:35:06.343 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.343466 | orchestrator | 12:35:06.343 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.343508 | orchestrator | 
12:35:06.343 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.343535 | orchestrator | 12:35:06.343 STDOUT terraform:  } 2025-06-02 12:35:06.343604 | orchestrator | 12:35:06.343 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 12:35:06.343668 | orchestrator | 12:35:06.343 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.343724 | orchestrator | 12:35:06.343 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.343764 | orchestrator | 12:35:06.343 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.343820 | orchestrator | 12:35:06.343 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.343880 | orchestrator | 12:35:06.343 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.343938 | orchestrator | 12:35:06.343 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 12:35:06.344001 | orchestrator | 12:35:06.343 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.344042 | orchestrator | 12:35:06.344 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.344083 | orchestrator | 12:35:06.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.344123 | orchestrator | 12:35:06.344 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.344150 | orchestrator | 12:35:06.344 STDOUT terraform:  } 2025-06-02 12:35:06.344241 | orchestrator | 12:35:06.344 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 12:35:06.344308 | orchestrator | 12:35:06.344 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.344388 | orchestrator | 12:35:06.344 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.344431 | orchestrator | 12:35:06.344 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.344492 | orchestrator | 12:35:06.344 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 12:35:06.344549 | orchestrator | 12:35:06.344 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.344610 | orchestrator | 12:35:06.344 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 12:35:06.344725 | orchestrator | 12:35:06.344 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.344768 | orchestrator | 12:35:06.344 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.344816 | orchestrator | 12:35:06.344 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.344857 | orchestrator | 12:35:06.344 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.344885 | orchestrator | 12:35:06.344 STDOUT terraform:  } 2025-06-02 12:35:06.344952 | orchestrator | 12:35:06.344 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 12:35:06.345017 | orchestrator | 12:35:06.344 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.345071 | orchestrator | 12:35:06.345 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.345110 | orchestrator | 12:35:06.345 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.345165 | orchestrator | 12:35:06.345 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.345235 | orchestrator | 12:35:06.345 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.345295 | orchestrator | 12:35:06.345 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 12:35:06.345349 | orchestrator | 12:35:06.345 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.345384 | orchestrator | 12:35:06.345 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.345424 | orchestrator | 12:35:06.345 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.345464 | orchestrator | 12:35:06.345 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.345497 | 
orchestrator | 12:35:06.345 STDOUT terraform:  } 2025-06-02 12:35:06.345561 | orchestrator | 12:35:06.345 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 12:35:06.345628 | orchestrator | 12:35:06.345 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.345682 | orchestrator | 12:35:06.345 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.345722 | orchestrator | 12:35:06.345 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.345776 | orchestrator | 12:35:06.345 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.345829 | orchestrator | 12:35:06.345 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.345887 | orchestrator | 12:35:06.345 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 12:35:06.345942 | orchestrator | 12:35:06.345 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.345977 | orchestrator | 12:35:06.345 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.346033 | orchestrator | 12:35:06.345 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.346075 | orchestrator | 12:35:06.346 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.346101 | orchestrator | 12:35:06.346 STDOUT terraform:  } 2025-06-02 12:35:06.346168 | orchestrator | 12:35:06.346 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 12:35:06.346263 | orchestrator | 12:35:06.346 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.346322 | orchestrator | 12:35:06.346 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.346361 | orchestrator | 12:35:06.346 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.346414 | orchestrator | 12:35:06.346 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.346465 | orchestrator | 
12:35:06.346 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.346519 | orchestrator | 12:35:06.346 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 12:35:06.346572 | orchestrator | 12:35:06.346 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.346608 | orchestrator | 12:35:06.346 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.346650 | orchestrator | 12:35:06.346 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.346691 | orchestrator | 12:35:06.346 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.346717 | orchestrator | 12:35:06.346 STDOUT terraform:  } 2025-06-02 12:35:06.346779 | orchestrator | 12:35:06.346 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 12:35:06.346840 | orchestrator | 12:35:06.346 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.346891 | orchestrator | 12:35:06.346 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.346930 | orchestrator | 12:35:06.346 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.346983 | orchestrator | 12:35:06.346 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.347040 | orchestrator | 12:35:06.346 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.347096 | orchestrator | 12:35:06.347 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 12:35:06.347147 | orchestrator | 12:35:06.347 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.347180 | orchestrator | 12:35:06.347 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.347234 | orchestrator | 12:35:06.347 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.347273 | orchestrator | 12:35:06.347 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.347299 | orchestrator | 12:35:06.347 STDOUT terraform:  } 2025-06-02 12:35:06.347360 | orchestrator | 
12:35:06.347 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 12:35:06.347420 | orchestrator | 12:35:06.347 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 12:35:06.347474 | orchestrator | 12:35:06.347 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 12:35:06.347512 | orchestrator | 12:35:06.347 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.347565 | orchestrator | 12:35:06.347 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.347622 | orchestrator | 12:35:06.347 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 12:35:06.347678 | orchestrator | 12:35:06.347 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 12:35:06.347729 | orchestrator | 12:35:06.347 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.347764 | orchestrator | 12:35:06.347 STDOUT terraform:  + size = 20 2025-06-02 12:35:06.347802 | orchestrator | 12:35:06.347 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 12:35:06.347840 | orchestrator | 12:35:06.347 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 12:35:06.347865 | orchestrator | 12:35:06.347 STDOUT terraform:  } 2025-06-02 12:35:06.348023 | orchestrator | 12:35:06.347 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 12:35:06.348087 | orchestrator | 12:35:06.348 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 12:35:06.348140 | orchestrator | 12:35:06.348 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 12:35:06.348222 | orchestrator | 12:35:06.348 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 12:35:06.348279 | orchestrator | 12:35:06.348 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 12:35:06.348328 | orchestrator | 12:35:06.348 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
12:35:06.348363 | orchestrator | 12:35:06.348 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.348394 | orchestrator | 12:35:06.348 STDOUT terraform:  + config_drive = true 2025-06-02 12:35:06.348440 | orchestrator | 12:35:06.348 STDOUT terraform:  + created = (known after apply) 2025-06-02 12:35:06.348487 | orchestrator | 12:35:06.348 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 12:35:06.348533 | orchestrator | 12:35:06.348 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 12:35:06.348567 | orchestrator | 12:35:06.348 STDOUT terraform:  + force_delete = false 2025-06-02 12:35:06.348612 | orchestrator | 12:35:06.348 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 12:35:06.348658 | orchestrator | 12:35:06.348 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.348709 | orchestrator | 12:35:06.348 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 12:35:06.348757 | orchestrator | 12:35:06.348 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 12:35:06.348792 | orchestrator | 12:35:06.348 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 12:35:06.348834 | orchestrator | 12:35:06.348 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 12:35:06.348869 | orchestrator | 12:35:06.348 STDOUT terraform:  + power_state = "active" 2025-06-02 12:35:06.348916 | orchestrator | 12:35:06.348 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.348960 | orchestrator | 12:35:06.348 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 12:35:06.348994 | orchestrator | 12:35:06.348 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 12:35:06.349042 | orchestrator | 12:35:06.349 STDOUT terraform:  + updated = (known after apply) 2025-06-02 12:35:06.349088 | orchestrator | 12:35:06.349 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 12:35:06.349115 | orchestrator | 12:35:06.349 STDOUT terraform:  + block_device 
{ 2025-06-02 12:35:06.349151 | orchestrator | 12:35:06.349 STDOUT terraform:  + boot_index = 0 2025-06-02 12:35:06.349189 | orchestrator | 12:35:06.349 STDOUT terraform:  + delete_on_termination = false 2025-06-02 12:35:06.349244 | orchestrator | 12:35:06.349 STDOUT terraform:  + destination_type = "volume" 2025-06-02 12:35:06.349283 | orchestrator | 12:35:06.349 STDOUT terraform:  + multiattach = false 2025-06-02 12:35:06.349323 | orchestrator | 12:35:06.349 STDOUT terraform:  + source_type = "volume" 2025-06-02 12:35:06.349374 | orchestrator | 12:35:06.349 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.349399 | orchestrator | 12:35:06.349 STDOUT terraform:  } 2025-06-02 12:35:06.349423 | orchestrator | 12:35:06.349 STDOUT terraform:  + network { 2025-06-02 12:35:06.349455 | orchestrator | 12:35:06.349 STDOUT terraform:  + access_network = false 2025-06-02 12:35:06.349498 | orchestrator | 12:35:06.349 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 12:35:06.349541 | orchestrator | 12:35:06.349 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 12:35:06.349584 | orchestrator | 12:35:06.349 STDOUT terraform:  + mac = (known after apply) 2025-06-02 12:35:06.349626 | orchestrator | 12:35:06.349 STDOUT terraform:  + name = (known after apply) 2025-06-02 12:35:06.349669 | orchestrator | 12:35:06.349 STDOUT terraform:  + port = (known after apply) 2025-06-02 12:35:06.349711 | orchestrator | 12:35:06.349 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.349740 | orchestrator | 12:35:06.349 STDOUT terraform:  } 2025-06-02 12:35:06.349764 | orchestrator | 12:35:06.349 STDOUT terraform:  } 2025-06-02 12:35:06.349820 | orchestrator | 12:35:06.349 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 12:35:06.349893 | orchestrator | 12:35:06.349 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 12:35:06.349940 | orchestrator | 
12:35:06.349 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 12:35:06.349985 | orchestrator | 12:35:06.349 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 12:35:06.350045 | orchestrator | 12:35:06.349 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 12:35:06.350092 | orchestrator | 12:35:06.350 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.350127 | orchestrator | 12:35:06.350 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.350159 | orchestrator | 12:35:06.350 STDOUT terraform:  + config_drive = true 2025-06-02 12:35:06.350220 | orchestrator | 12:35:06.350 STDOUT terraform:  + created = (known after apply) 2025-06-02 12:35:06.350268 | orchestrator | 12:35:06.350 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 12:35:06.350308 | orchestrator | 12:35:06.350 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 12:35:06.350343 | orchestrator | 12:35:06.350 STDOUT terraform:  + force_delete = false 2025-06-02 12:35:06.350388 | orchestrator | 12:35:06.350 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 12:35:06.350436 | orchestrator | 12:35:06.350 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.350482 | orchestrator | 12:35:06.350 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 12:35:06.350530 | orchestrator | 12:35:06.350 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 12:35:06.350567 | orchestrator | 12:35:06.350 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 12:35:06.350609 | orchestrator | 12:35:06.350 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 12:35:06.350645 | orchestrator | 12:35:06.350 STDOUT terraform:  + power_state = "active" 2025-06-02 12:35:06.350696 | orchestrator | 12:35:06.350 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.350744 | orchestrator | 12:35:06.350 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 12:35:06.350778 | orchestrator | 12:35:06.350 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 12:35:06.350824 | orchestrator | 12:35:06.350 STDOUT terraform:  + updated = (known after apply) 2025-06-02 12:35:06.350886 | orchestrator | 12:35:06.350 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 12:35:06.350914 | orchestrator | 12:35:06.350 STDOUT terraform:  + block_device { 2025-06-02 12:35:06.350949 | orchestrator | 12:35:06.350 STDOUT terraform:  + boot_index = 0 2025-06-02 12:35:06.350989 | orchestrator | 12:35:06.350 STDOUT terraform:  + delete_on_termination = false 2025-06-02 12:35:06.351029 | orchestrator | 12:35:06.350 STDOUT terraform:  + destination_type = "volume" 2025-06-02 12:35:06.351072 | orchestrator | 12:35:06.351 STDOUT terraform:  + multiattach = false 2025-06-02 12:35:06.351113 | orchestrator | 12:35:06.351 STDOUT terraform:  + source_type = "volume" 2025-06-02 12:35:06.351163 | orchestrator | 12:35:06.351 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.351187 | orchestrator | 12:35:06.351 STDOUT terraform:  } 2025-06-02 12:35:06.351242 | orchestrator | 12:35:06.351 STDOUT terraform:  + network { 2025-06-02 12:35:06.351275 | orchestrator | 12:35:06.351 STDOUT terraform:  + access_network = false 2025-06-02 12:35:06.351318 | orchestrator | 12:35:06.351 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 12:35:06.351359 | orchestrator | 12:35:06.351 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 12:35:06.351401 | orchestrator | 12:35:06.351 STDOUT terraform:  + mac = (known after apply) 2025-06-02 12:35:06.351443 | orchestrator | 12:35:06.351 STDOUT terraform:  + name = (known after apply) 2025-06-02 12:35:06.351487 | orchestrator | 12:35:06.351 STDOUT terraform:  + port = (known after apply) 2025-06-02 12:35:06.351530 | orchestrator | 12:35:06.351 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.351554 | 
orchestrator | 12:35:06.351 STDOUT terraform:  } 2025-06-02 12:35:06.351577 | orchestrator | 12:35:06.351 STDOUT terraform:  } 2025-06-02 12:35:06.351632 | orchestrator | 12:35:06.351 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 12:35:06.351681 | orchestrator | 12:35:06.351 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 12:35:06.351728 | orchestrator | 12:35:06.351 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 12:35:06.351771 | orchestrator | 12:35:06.351 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 12:35:06.351813 | orchestrator | 12:35:06.351 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 12:35:06.351855 | orchestrator | 12:35:06.351 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.351886 | orchestrator | 12:35:06.351 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.351915 | orchestrator | 12:35:06.351 STDOUT terraform:  + config_drive = true 2025-06-02 12:35:06.351957 | orchestrator | 12:35:06.351 STDOUT terraform:  + created = (known after apply) 2025-06-02 12:35:06.351999 | orchestrator | 12:35:06.351 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 12:35:06.352035 | orchestrator | 12:35:06.352 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 12:35:06.352066 | orchestrator | 12:35:06.352 STDOUT terraform:  + force_delete = false 2025-06-02 12:35:06.352107 | orchestrator | 12:35:06.352 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 12:35:06.352151 | orchestrator | 12:35:06.352 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.352206 | orchestrator | 12:35:06.352 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 12:35:06.352250 | orchestrator | 12:35:06.352 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 12:35:06.352287 | orchestrator | 12:35:06.352 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 12:35:06.352327 | orchestrator | 12:35:06.352 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 12:35:06.352359 | orchestrator | 12:35:06.352 STDOUT terraform:  + power_state = "active" 2025-06-02 12:35:06.352402 | orchestrator | 12:35:06.352 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.352444 | orchestrator | 12:35:06.352 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 12:35:06.352474 | orchestrator | 12:35:06.352 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 12:35:06.352517 | orchestrator | 12:35:06.352 STDOUT terraform:  + updated = (known after apply) 2025-06-02 12:35:06.352575 | orchestrator | 12:35:06.352 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 12:35:06.352602 | orchestrator | 12:35:06.352 STDOUT terraform:  + block_device { 2025-06-02 12:35:06.352633 | orchestrator | 12:35:06.352 STDOUT terraform:  + boot_index = 0 2025-06-02 12:35:06.352668 | orchestrator | 12:35:06.352 STDOUT terraform:  + delete_on_termination = false 2025-06-02 12:35:06.352706 | orchestrator | 12:35:06.352 STDOUT terraform:  + destination_type = "volume" 2025-06-02 12:35:06.352742 | orchestrator | 12:35:06.352 STDOUT terraform:  + multiattach = false 2025-06-02 12:35:06.352781 | orchestrator | 12:35:06.352 STDOUT terraform:  + source_type = "volume" 2025-06-02 12:35:06.352828 | orchestrator | 12:35:06.352 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.352851 | orchestrator | 12:35:06.352 STDOUT terraform:  } 2025-06-02 12:35:06.352875 | orchestrator | 12:35:06.352 STDOUT terraform:  + network { 2025-06-02 12:35:06.352905 | orchestrator | 12:35:06.352 STDOUT terraform:  + access_network = false 2025-06-02 12:35:06.352943 | orchestrator | 12:35:06.352 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 12:35:06.352981 | orchestrator | 12:35:06.352 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
12:35:06.353019 | orchestrator | 12:35:06.352 STDOUT terraform:  + mac = (known after apply) 2025-06-02 12:35:06.353059 | orchestrator | 12:35:06.353 STDOUT terraform:  + name = (known after apply) 2025-06-02 12:35:06.353097 | orchestrator | 12:35:06.353 STDOUT terraform:  + port = (known after apply) 2025-06-02 12:35:06.353136 | orchestrator | 12:35:06.353 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 12:35:06.353157 | orchestrator | 12:35:06.353 STDOUT terraform:  } 2025-06-02 12:35:06.353178 | orchestrator | 12:35:06.353 STDOUT terraform:  } 2025-06-02 12:35:06.353239 | orchestrator | 12:35:06.353 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 12:35:06.353289 | orchestrator | 12:35:06.353 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 12:35:06.353332 | orchestrator | 12:35:06.353 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 12:35:06.353374 | orchestrator | 12:35:06.353 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 12:35:06.353423 | orchestrator | 12:35:06.353 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 12:35:06.353466 | orchestrator | 12:35:06.353 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.353497 | orchestrator | 12:35:06.353 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 12:35:06.353526 | orchestrator | 12:35:06.353 STDOUT terraform:  + config_drive = true 2025-06-02 12:35:06.353570 | orchestrator | 12:35:06.353 STDOUT terraform:  + created = (known after apply) 2025-06-02 12:35:06.353613 | orchestrator | 12:35:06.353 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 12:35:06.353652 | orchestrator | 12:35:06.353 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 12:35:06.353683 | orchestrator | 12:35:06.353 STDOUT terraform:  + force_delete = false 2025-06-02 12:35:06.353725 | orchestrator | 12:35:06.353 STDOUT terraform:  + 
2025-06-02 12:35:06.353 | orchestrator | STDOUT terraform:
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  #   (plan identical to node_server[3] except name = "testbed-node-4")

  # openstack_compute_instance_v2.node_server[5] will be created
  #   (plan identical to node_server[3] except name = "testbed-node-5")

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [6] will be created
  #   (plans identical to node_volume_attachment[0]; all values known after apply)
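The plan entries above show a count-indexed pattern: identically configured instances boot from volumes, and separate `node_volume_attachment` resources attach extra data volumes. A minimal sketch of Terraform configuration that would produce a plan of this shape — the resource names and literal values (`OSISM-8V-32`, `testbed`, `nova`) come from the log, but the variables and the referenced volume/port resources are assumptions, not code from the testbed repository:

```hcl
# Hypothetical sketch; variables and volume/port resources are assumed.
variable "node_count" {
  default = 6
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = var.node_count
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true

  # Boot from a pre-created root volume (boot_index 0, kept on destroy),
  # matching the block_device stanza in the plan output.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id # assumed
  }

  # Attach via a pre-created management port rather than a network name.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id # assumed
  }
}

# Extra data volumes; the plan shows indices [0..8].
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id # assumed
}
```

Because `instance_id` and `volume_id` are references to other to-be-created resources, the plan can only show them as `(known after apply)`, which is exactly what the log records.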
2025-06-02 12:35:06.362 | orchestrator | STDOUT terraform:
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  #   (plan identical to node_volume_attachment[0]; all values known after apply)

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  #   (plan identical to node_volume_attachment[0]; all values known after apply)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      #   (attribute set as for manager_port_management; all known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      #   (attribute set as for manager_port_management; all known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 12:35:06.368584 | orchestrator | 12:35:06.368 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 12:35:06.368606 | orchestrator | 12:35:06.368 STDOUT terraform:  } 2025-06-02 12:35:06.368648 | orchestrator | 12:35:06.368 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.368688 | orchestrator | 12:35:06.368 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 12:35:06.368710 | orchestrator | 12:35:06.368 STDOUT terraform:  } 2025-06-02 12:35:06.368736 | orchestrator | 12:35:06.368 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.368772 | orchestrator | 12:35:06.368 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 12:35:06.368793 | orchestrator | 12:35:06.368 STDOUT terraform:  } 2025-06-02 12:35:06.368824 | orchestrator | 12:35:06.368 STDOUT terraform:  + binding (known after apply) 2025-06-02 12:35:06.368845 | orchestrator | 12:35:06.368 STDOUT terraform:  + fixed_ip { 2025-06-02 12:35:06.368877 | orchestrator | 12:35:06.368 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 12:35:06.368914 | orchestrator | 12:35:06.368 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.368935 | orchestrator | 12:35:06.368 STDOUT terraform:  } 2025-06-02 12:35:06.368961 | orchestrator | 12:35:06.368 STDOUT terraform:  } 2025-06-02 12:35:06.369016 | orchestrator | 12:35:06.368 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 12:35:06.369068 | orchestrator | 12:35:06.369 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 12:35:06.369111 | orchestrator | 12:35:06.369 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 12:35:06.369154 | orchestrator | 12:35:06.369 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 12:35:06.369227 | orchestrator | 12:35:06.369 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 12:35:06.369274 | orchestrator | 12:35:06.369 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.369317 | orchestrator | 12:35:06.369 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 12:35:06.369362 | orchestrator | 12:35:06.369 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 12:35:06.369408 | orchestrator | 12:35:06.369 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 12:35:06.369453 | orchestrator | 12:35:06.369 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 12:35:06.369501 | orchestrator | 12:35:06.369 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.369547 | orchestrator | 12:35:06.369 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 12:35:06.369591 | orchestrator | 12:35:06.369 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 12:35:06.369636 | orchestrator | 12:35:06.369 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 12:35:06.369682 | orchestrator | 12:35:06.369 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 12:35:06.369727 | orchestrator | 12:35:06.369 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.369769 | orchestrator | 12:35:06.369 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 12:35:06.369813 | orchestrator | 12:35:06.369 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.369841 | orchestrator | 12:35:06.369 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.369877 | orchestrator | 12:35:06.369 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 12:35:06.369899 | orchestrator | 12:35:06.369 STDOUT terraform:  } 2025-06-02 12:35:06.369927 | orchestrator | 12:35:06.369 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.369965 | orchestrator | 12:35:06.369 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 12:35:06.370005 | 
orchestrator | 12:35:06.369 STDOUT terraform:  } 2025-06-02 12:35:06.370060 | orchestrator | 12:35:06.370 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.370098 | orchestrator | 12:35:06.370 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 12:35:06.370120 | orchestrator | 12:35:06.370 STDOUT terraform:  } 2025-06-02 12:35:06.370148 | orchestrator | 12:35:06.370 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.370184 | orchestrator | 12:35:06.370 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 12:35:06.370226 | orchestrator | 12:35:06.370 STDOUT terraform:  } 2025-06-02 12:35:06.370258 | orchestrator | 12:35:06.370 STDOUT terraform:  + binding (known after apply) 2025-06-02 12:35:06.370280 | orchestrator | 12:35:06.370 STDOUT terraform:  + fixed_ip { 2025-06-02 12:35:06.370312 | orchestrator | 12:35:06.370 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 12:35:06.370349 | orchestrator | 12:35:06.370 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.370370 | orchestrator | 12:35:06.370 STDOUT terraform:  } 2025-06-02 12:35:06.370391 | orchestrator | 12:35:06.370 STDOUT terraform:  } 2025-06-02 12:35:06.370445 | orchestrator | 12:35:06.370 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 12:35:06.370497 | orchestrator | 12:35:06.370 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 12:35:06.370541 | orchestrator | 12:35:06.370 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 12:35:06.370586 | orchestrator | 12:35:06.370 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 12:35:06.370629 | orchestrator | 12:35:06.370 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 12:35:06.370673 | orchestrator | 12:35:06.370 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.370718 | orchestrator | 
12:35:06.370 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 12:35:06.370761 | orchestrator | 12:35:06.370 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 12:35:06.370804 | orchestrator | 12:35:06.370 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 12:35:06.370847 | orchestrator | 12:35:06.370 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 12:35:06.370891 | orchestrator | 12:35:06.370 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.370935 | orchestrator | 12:35:06.370 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 12:35:06.370979 | orchestrator | 12:35:06.370 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 12:35:06.371022 | orchestrator | 12:35:06.370 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 12:35:06.371065 | orchestrator | 12:35:06.371 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 12:35:06.371109 | orchestrator | 12:35:06.371 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.371152 | orchestrator | 12:35:06.371 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 12:35:06.371228 | orchestrator | 12:35:06.371 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.371259 | orchestrator | 12:35:06.371 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.371296 | orchestrator | 12:35:06.371 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 12:35:06.371317 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371345 | orchestrator | 12:35:06.371 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.371382 | orchestrator | 12:35:06.371 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 12:35:06.371409 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371448 | orchestrator | 12:35:06.371 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 
12:35:06.371489 | orchestrator | 12:35:06.371 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 12:35:06.371511 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371539 | orchestrator | 12:35:06.371 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.371575 | orchestrator | 12:35:06.371 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 12:35:06.371596 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371629 | orchestrator | 12:35:06.371 STDOUT terraform:  + binding (known after apply) 2025-06-02 12:35:06.371651 | orchestrator | 12:35:06.371 STDOUT terraform:  + fixed_ip { 2025-06-02 12:35:06.371683 | orchestrator | 12:35:06.371 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 12:35:06.371720 | orchestrator | 12:35:06.371 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.371743 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371763 | orchestrator | 12:35:06.371 STDOUT terraform:  } 2025-06-02 12:35:06.371816 | orchestrator | 12:35:06.371 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 12:35:06.371868 | orchestrator | 12:35:06.371 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 12:35:06.371912 | orchestrator | 12:35:06.371 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 12:35:06.371957 | orchestrator | 12:35:06.371 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 12:35:06.372000 | orchestrator | 12:35:06.371 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 12:35:06.372050 | orchestrator | 12:35:06.372 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.372094 | orchestrator | 12:35:06.372 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 12:35:06.372137 | orchestrator | 12:35:06.372 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-02 12:35:06.372180 | orchestrator | 12:35:06.372 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 12:35:06.372239 | orchestrator | 12:35:06.372 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 12:35:06.372284 | orchestrator | 12:35:06.372 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.372328 | orchestrator | 12:35:06.372 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 12:35:06.372371 | orchestrator | 12:35:06.372 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 12:35:06.372413 | orchestrator | 12:35:06.372 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 12:35:06.372456 | orchestrator | 12:35:06.372 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 12:35:06.372500 | orchestrator | 12:35:06.372 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.372546 | orchestrator | 12:35:06.372 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 12:35:06.372594 | orchestrator | 12:35:06.372 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.372622 | orchestrator | 12:35:06.372 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.372658 | orchestrator | 12:35:06.372 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 12:35:06.372680 | orchestrator | 12:35:06.372 STDOUT terraform:  } 2025-06-02 12:35:06.372706 | orchestrator | 12:35:06.372 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.372743 | orchestrator | 12:35:06.372 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 12:35:06.372764 | orchestrator | 12:35:06.372 STDOUT terraform:  } 2025-06-02 12:35:06.372791 | orchestrator | 12:35:06.372 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.372827 | orchestrator | 12:35:06.372 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 12:35:06.372850 | orchestrator | 12:35:06.372 STDOUT terraform:  } 
2025-06-02 12:35:06.372878 | orchestrator | 12:35:06.372 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.372915 | orchestrator | 12:35:06.372 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 12:35:06.372937 | orchestrator | 12:35:06.372 STDOUT terraform:  } 2025-06-02 12:35:06.372968 | orchestrator | 12:35:06.372 STDOUT terraform:  + binding (known after apply) 2025-06-02 12:35:06.372991 | orchestrator | 12:35:06.372 STDOUT terraform:  + fixed_ip { 2025-06-02 12:35:06.373025 | orchestrator | 12:35:06.372 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 12:35:06.373062 | orchestrator | 12:35:06.373 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.373083 | orchestrator | 12:35:06.373 STDOUT terraform:  } 2025-06-02 12:35:06.373104 | orchestrator | 12:35:06.373 STDOUT terraform:  } 2025-06-02 12:35:06.373159 | orchestrator | 12:35:06.373 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 12:35:06.373222 | orchestrator | 12:35:06.373 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 12:35:06.373267 | orchestrator | 12:35:06.373 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 12:35:06.373311 | orchestrator | 12:35:06.373 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 12:35:06.373353 | orchestrator | 12:35:06.373 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 12:35:06.373399 | orchestrator | 12:35:06.373 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.373442 | orchestrator | 12:35:06.373 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 12:35:06.373485 | orchestrator | 12:35:06.373 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 12:35:06.373528 | orchestrator | 12:35:06.373 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 12:35:06.373571 | orchestrator | 
12:35:06.373 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 12:35:06.373615 | orchestrator | 12:35:06.373 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.373658 | orchestrator | 12:35:06.373 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 12:35:06.373708 | orchestrator | 12:35:06.373 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 12:35:06.373752 | orchestrator | 12:35:06.373 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 12:35:06.373796 | orchestrator | 12:35:06.373 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 12:35:06.373840 | orchestrator | 12:35:06.373 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.373882 | orchestrator | 12:35:06.373 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 12:35:06.373926 | orchestrator | 12:35:06.373 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.373957 | orchestrator | 12:35:06.373 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.373994 | orchestrator | 12:35:06.373 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 12:35:06.374043 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374073 | orchestrator | 12:35:06.374 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.374111 | orchestrator | 12:35:06.374 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 12:35:06.374133 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374160 | orchestrator | 12:35:06.374 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.374220 | orchestrator | 12:35:06.374 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 12:35:06.374244 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374272 | orchestrator | 12:35:06.374 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 12:35:06.374309 | orchestrator | 12:35:06.374 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 12:35:06.374332 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374363 | orchestrator | 12:35:06.374 STDOUT terraform:  + binding (known after apply) 2025-06-02 12:35:06.374386 | orchestrator | 12:35:06.374 STDOUT terraform:  + fixed_ip { 2025-06-02 12:35:06.374420 | orchestrator | 12:35:06.374 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 12:35:06.374458 | orchestrator | 12:35:06.374 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.374479 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374500 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374555 | orchestrator | 12:35:06.374 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 12:35:06.374611 | orchestrator | 12:35:06.374 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 12:35:06.374639 | orchestrator | 12:35:06.374 STDOUT terraform:  + force_destroy = false 2025-06-02 12:35:06.374678 | orchestrator | 12:35:06.374 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.374715 | orchestrator | 12:35:06.374 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 12:35:06.374751 | orchestrator | 12:35:06.374 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.374787 | orchestrator | 12:35:06.374 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 12:35:06.374830 | orchestrator | 12:35:06.374 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 12:35:06.374851 | orchestrator | 12:35:06.374 STDOUT terraform:  } 2025-06-02 12:35:06.374895 | orchestrator | 12:35:06.374 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 12:35:06.374939 | orchestrator | 12:35:06.374 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 12:35:06.374984 
| orchestrator | 12:35:06.374 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 12:35:06.375028 | orchestrator | 12:35:06.374 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 12:35:06.375059 | orchestrator | 12:35:06.375 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 12:35:06.375081 | orchestrator | 12:35:06.375 STDOUT terraform:  + "nova", 2025-06-02 12:35:06.375102 | orchestrator | 12:35:06.375 STDOUT terraform:  ] 2025-06-02 12:35:06.375148 | orchestrator | 12:35:06.375 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 12:35:06.375204 | orchestrator | 12:35:06.375 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 12:35:06.375263 | orchestrator | 12:35:06.375 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 12:35:06.375309 | orchestrator | 12:35:06.375 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.375347 | orchestrator | 12:35:06.375 STDOUT terraform:  + name = "testbed" 2025-06-02 12:35:06.375392 | orchestrator | 12:35:06.375 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.375436 | orchestrator | 12:35:06.375 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.375473 | orchestrator | 12:35:06.375 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 12:35:06.375495 | orchestrator | 12:35:06.375 STDOUT terraform:  } 2025-06-02 12:35:06.375556 | orchestrator | 12:35:06.375 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 12:35:06.375618 | orchestrator | 12:35:06.375 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 12:35:06.375647 | orchestrator | 12:35:06.375 STDOUT terraform:  + description = "ssh" 2025-06-02 12:35:06.375679 | orchestrator | 12:35:06.375 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.375708 | 
orchestrator | 12:35:06.375 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.375749 | orchestrator | 12:35:06.375 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.375777 | orchestrator | 12:35:06.375 STDOUT terraform:  + port_range_max = 22 2025-06-02 12:35:06.375805 | orchestrator | 12:35:06.375 STDOUT terraform:  + port_range_min = 22 2025-06-02 12:35:06.375834 | orchestrator | 12:35:06.375 STDOUT terraform:  + protocol = "tcp" 2025-06-02 12:35:06.375876 | orchestrator | 12:35:06.375 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.375915 | orchestrator | 12:35:06.375 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.375948 | orchestrator | 12:35:06.375 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:06.375990 | orchestrator | 12:35:06.375 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:06.376029 | orchestrator | 12:35:06.375 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.376053 | orchestrator | 12:35:06.376 STDOUT terraform:  } 2025-06-02 12:35:06.376115 | orchestrator | 12:35:06.376 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 12:35:06.376177 | orchestrator | 12:35:06.376 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 12:35:06.376223 | orchestrator | 12:35:06.376 STDOUT terraform:  + description = "wireguard" 2025-06-02 12:35:06.376256 | orchestrator | 12:35:06.376 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.376285 | orchestrator | 12:35:06.376 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.376324 | orchestrator | 12:35:06.376 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.376352 | orchestrator | 12:35:06.376 STDOUT terraform:  + port_range_max = 51820 2025-06-02 12:35:06.376380 | orchestrator | 12:35:06.376 STDOUT 
terraform:  + port_range_min = 51820 2025-06-02 12:35:06.376408 | orchestrator | 12:35:06.376 STDOUT terraform:  + protocol = "udp" 2025-06-02 12:35:06.376448 | orchestrator | 12:35:06.376 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.376487 | orchestrator | 12:35:06.376 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.376519 | orchestrator | 12:35:06.376 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:06.376556 | orchestrator | 12:35:06.376 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:06.376594 | orchestrator | 12:35:06.376 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.376620 | orchestrator | 12:35:06.376 STDOUT terraform:  } 2025-06-02 12:35:06.376682 | orchestrator | 12:35:06.376 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 12:35:06.376742 | orchestrator | 12:35:06.376 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 12:35:06.376773 | orchestrator | 12:35:06.376 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.376802 | orchestrator | 12:35:06.376 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.376842 | orchestrator | 12:35:06.376 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.376871 | orchestrator | 12:35:06.376 STDOUT terraform:  + protocol = "tcp" 2025-06-02 12:35:06.376909 | orchestrator | 12:35:06.376 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.376946 | orchestrator | 12:35:06.376 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.376985 | orchestrator | 12:35:06.376 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 12:35:06.377023 | orchestrator | 12:35:06.376 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:06.377063 | orchestrator | 
12:35:06.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.377089 | orchestrator | 12:35:06.377 STDOUT terraform:  } 2025-06-02 12:35:06.377151 | orchestrator | 12:35:06.377 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 12:35:06.377243 | orchestrator | 12:35:06.377 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 12:35:06.377282 | orchestrator | 12:35:06.377 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.377311 | orchestrator | 12:35:06.377 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.377349 | orchestrator | 12:35:06.377 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.377376 | orchestrator | 12:35:06.377 STDOUT terraform:  + protocol = "udp" 2025-06-02 12:35:06.377415 | orchestrator | 12:35:06.377 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.377452 | orchestrator | 12:35:06.377 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.377489 | orchestrator | 12:35:06.377 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 12:35:06.377526 | orchestrator | 12:35:06.377 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:06.377566 | orchestrator | 12:35:06.377 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.377587 | orchestrator | 12:35:06.377 STDOUT terraform:  } 2025-06-02 12:35:06.377646 | orchestrator | 12:35:06.377 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 12:35:06.377709 | orchestrator | 12:35:06.377 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 12:35:06.377741 | orchestrator | 12:35:06.377 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.377769 | orchestrator | 12:35:06.377 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.377808 | orchestrator | 12:35:06.377 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.377831 | orchestrator | 12:35:06.377 STDOUT terraform:  + protocol = "icmp" 2025-06-02 12:35:06.377927 | orchestrator | 12:35:06.377 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.377964 | orchestrator | 12:35:06.377 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.377997 | orchestrator | 12:35:06.377 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 12:35:06.378052 | orchestrator | 12:35:06.378 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 12:35:06.378091 | orchestrator | 12:35:06.378 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 12:35:06.378112 | orchestrator | 12:35:06.378 STDOUT terraform:  } 2025-06-02 12:35:06.378169 | orchestrator | 12:35:06.378 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 12:35:06.378265 | orchestrator | 12:35:06.378 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 12:35:06.378301 | orchestrator | 12:35:06.378 STDOUT terraform:  + direction = "ingress" 2025-06-02 12:35:06.378329 | orchestrator | 12:35:06.378 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 12:35:06.378374 | orchestrator | 12:35:06.378 STDOUT terraform:  + id = (known after apply) 2025-06-02 12:35:06.378406 | orchestrator | 12:35:06.378 STDOUT terraform:  + protocol = "tcp" 2025-06-02 12:35:06.378444 | orchestrator | 12:35:06.378 STDOUT terraform:  + region = (known after apply) 2025-06-02 12:35:06.378481 | orchestrator | 12:35:06.378 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 12:35:06.378513 | orchestrator | 12:35:06.378 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-06-02 12:35:06.378550 | orchestrator | 12:35:06.378 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 12:35:06.378588 | orchestrator | 12:35:06.378 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.378610 | orchestrator | 12:35:06.378 STDOUT terraform:  }
2025-06-02 12:35:06.378668 | orchestrator | 12:35:06.378 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-06-02 12:35:06.378727 | orchestrator | 12:35:06.378 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-06-02 12:35:06.378759 | orchestrator | 12:35:06.378 STDOUT terraform:  + direction = "ingress"
2025-06-02 12:35:06.378787 | orchestrator | 12:35:06.378 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 12:35:06.378826 | orchestrator | 12:35:06.378 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.378854 | orchestrator | 12:35:06.378 STDOUT terraform:  + protocol = "udp"
2025-06-02 12:35:06.378892 | orchestrator | 12:35:06.378 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.378929 | orchestrator | 12:35:06.378 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 12:35:06.378961 | orchestrator | 12:35:06.378 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 12:35:06.378998 | orchestrator | 12:35:06.378 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 12:35:06.379037 | orchestrator | 12:35:06.379 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.379059 | orchestrator | 12:35:06.379 STDOUT terraform:  }
2025-06-02 12:35:06.379117 | orchestrator | 12:35:06.379 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-06-02 12:35:06.379177 | orchestrator | 12:35:06.379 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-06-02 12:35:06.379233 | orchestrator | 12:35:06.379 STDOUT terraform:  + direction = "ingress"
2025-06-02 12:35:06.379263 | orchestrator | 12:35:06.379 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 12:35:06.379302 | orchestrator | 12:35:06.379 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.379333 | orchestrator | 12:35:06.379 STDOUT terraform:  + protocol = "icmp"
2025-06-02 12:35:06.379373 | orchestrator | 12:35:06.379 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.379410 | orchestrator | 12:35:06.379 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 12:35:06.379443 | orchestrator | 12:35:06.379 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 12:35:06.379485 | orchestrator | 12:35:06.379 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 12:35:06.379524 | orchestrator | 12:35:06.379 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.379547 | orchestrator | 12:35:06.379 STDOUT terraform:  }
2025-06-02 12:35:06.379605 | orchestrator | 12:35:06.379 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-02 12:35:06.379662 | orchestrator | 12:35:06.379 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-02 12:35:06.379692 | orchestrator | 12:35:06.379 STDOUT terraform:  + description = "vrrp"
2025-06-02 12:35:06.379723 | orchestrator | 12:35:06.379 STDOUT terraform:  + direction = "ingress"
2025-06-02 12:35:06.379751 | orchestrator | 12:35:06.379 STDOUT terraform:  + ethertype = "IPv4"
2025-06-02 12:35:06.379790 | orchestrator | 12:35:06.379 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.379820 | orchestrator | 12:35:06.379 STDOUT terraform:  + protocol = "112"
2025-06-02 12:35:06.379859 | orchestrator | 12:35:06.379 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.379897 | orchestrator | 12:35:06.379 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-02 12:35:06.379929 | orchestrator | 12:35:06.379 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-02 12:35:06.379974 | orchestrator | 12:35:06.379 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-02 12:35:06.380012 | orchestrator | 12:35:06.379 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.380033 | orchestrator | 12:35:06.380 STDOUT terraform:  }
2025-06-02 12:35:06.380112 | orchestrator | 12:35:06.380 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-02 12:35:06.380243 | orchestrator | 12:35:06.380 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-02 12:35:06.380292 | orchestrator | 12:35:06.380 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 12:35:06.380341 | orchestrator | 12:35:06.380 STDOUT terraform:  + description = "management security group"
2025-06-02 12:35:06.380392 | orchestrator | 12:35:06.380 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.380428 | orchestrator | 12:35:06.380 STDOUT terraform:  + name = "testbed-management"
2025-06-02 12:35:06.380479 | orchestrator | 12:35:06.380 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.380515 | orchestrator | 12:35:06.380 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 12:35:06.380564 | orchestrator | 12:35:06.380 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.380587 | orchestrator | 12:35:06.380 STDOUT terraform:  }
2025-06-02 12:35:06.380655 | orchestrator | 12:35:06.380 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-02 12:35:06.380725 | orchestrator | 12:35:06.380 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-02 12:35:06.380778 | orchestrator | 12:35:06.380 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 12:35:06.380816 | orchestrator | 12:35:06.380 STDOUT terraform:  + description = "node security group"
2025-06-02 12:35:06.380874 | orchestrator | 12:35:06.380 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.380906 | orchestrator | 12:35:06.380 STDOUT terraform:  + name = "testbed-node"
2025-06-02 12:35:06.380957 | orchestrator | 12:35:06.380 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.381001 | orchestrator | 12:35:06.380 STDOUT terraform:  + stateful = (known after apply)
2025-06-02 12:35:06.381045 | orchestrator | 12:35:06.381 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.381067 | orchestrator | 12:35:06.381 STDOUT terraform:  }
2025-06-02 12:35:06.381134 | orchestrator | 12:35:06.381 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-02 12:35:06.381214 | orchestrator | 12:35:06.381 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-02 12:35:06.381267 | orchestrator | 12:35:06.381 STDOUT terraform:  + all_tags = (known after apply)
2025-06-02 12:35:06.381306 | orchestrator | 12:35:06.381 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-02 12:35:06.381349 | orchestrator | 12:35:06.381 STDOUT terraform:  + dns_nameservers = [
2025-06-02 12:35:06.381374 | orchestrator | 12:35:06.381 STDOUT terraform:  + "8.8.8.8",
2025-06-02 12:35:06.381412 | orchestrator | 12:35:06.381 STDOUT terraform:  + "9.9.9.9",
2025-06-02 12:35:06.381435 | orchestrator | 12:35:06.381 STDOUT terraform:  ]
2025-06-02 12:35:06.381463 | orchestrator | 12:35:06.381 STDOUT terraform:  + enable_dhcp = true
2025-06-02 12:35:06.381516 | orchestrator | 12:35:06.381 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-02 12:35:06.381568 | orchestrator | 12:35:06.381 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.381597 | orchestrator | 12:35:06.381 STDOUT terraform:  + ip_version = 4
2025-06-02 12:35:06.381648 | orchestrator | 12:35:06.381 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-02 12:35:06.381687 | orchestrator | 12:35:06.381 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-02 12:35:06.381747 | orchestrator | 12:35:06.381 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-02 12:35:06.381800 | orchestrator | 12:35:06.381 STDOUT terraform:  + network_id = (known after apply)
2025-06-02 12:35:06.381828 | orchestrator | 12:35:06.381 STDOUT terraform:  + no_gateway = false
2025-06-02 12:35:06.381881 | orchestrator | 12:35:06.381 STDOUT terraform:  + region = (known after apply)
2025-06-02 12:35:06.381919 | orchestrator | 12:35:06.381 STDOUT terraform:  + service_types = (known after apply)
2025-06-02 12:35:06.381971 | orchestrator | 12:35:06.381 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-02 12:35:06.381998 | orchestrator | 12:35:06.381 STDOUT terraform:  + allocation_pool {
2025-06-02 12:35:06.382075 | orchestrator | 12:35:06.382 STDOUT terraform:  + end = "192.168.31.250"
2025-06-02 12:35:06.382123 | orchestrator | 12:35:06.382 STDOUT terraform:  + start = "192.168.31.200"
2025-06-02 12:35:06.382145 | orchestrator | 12:35:06.382 STDOUT terraform:  }
2025-06-02 12:35:06.382167 | orchestrator | 12:35:06.382 STDOUT terraform:  }
2025-06-02 12:35:06.382241 | orchestrator | 12:35:06.382 STDOUT terraform:  # terraform_data.image will be created
2025-06-02 12:35:06.382312 | orchestrator | 12:35:06.382 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-02 12:35:06.382361 | orchestrator | 12:35:06.382 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.382390 | orchestrator | 12:35:06.382 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 12:35:06.382436 | orchestrator | 12:35:06.382 STDOUT terraform:  + output = (known after apply)
2025-06-02 12:35:06.382459 | orchestrator | 12:35:06.382 STDOUT terraform:  }
2025-06-02 12:35:06.382509 | orchestrator | 12:35:06.382 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-02 12:35:06.382548 | orchestrator | 12:35:06.382 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-02 12:35:06.382595 | orchestrator | 12:35:06.382 STDOUT terraform:  + id = (known after apply)
2025-06-02 12:35:06.382624 | orchestrator | 12:35:06.382 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-02 12:35:06.382671 | orchestrator | 12:35:06.382 STDOUT terraform:  + output = (known after apply)
2025-06-02 12:35:06.382693 | orchestrator | 12:35:06.382 STDOUT terraform:  }
2025-06-02 12:35:06.382744 | orchestrator | 12:35:06.382 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-02 12:35:06.382766 | orchestrator | 12:35:06.382 STDOUT terraform: Changes to Outputs:
2025-06-02 12:35:06.382806 | orchestrator | 12:35:06.382 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-02 12:35:06.382844 | orchestrator | 12:35:06.382 STDOUT terraform:  + private_key = (sensitive value)
2025-06-02 12:35:06.622506 | orchestrator | 12:35:06.622 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-02 12:35:06.622590 | orchestrator | 12:35:06.622 STDOUT terraform: terraform_data.image: Creating...
2025-06-02 12:35:06.622707 | orchestrator | 12:35:06.622 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=a38d4ccf-ea54-df06-1687-447e21034eb9]
2025-06-02 12:35:06.623734 | orchestrator | 12:35:06.623 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=dd450bde-8f45-2f0e-5c24-3edafa491e33]
2025-06-02 12:35:06.641416 | orchestrator | 12:35:06.641 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-02 12:35:06.647495 | orchestrator | 12:35:06.647 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-02 12:35:06.652699 | orchestrator | 12:35:06.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
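The plan entries above describe the testbed's security-group rules, among them a VRRP rule using IP protocol number 112. A minimal sketch of how such a rule is declared with the OpenStack Terraform provider, reconstructed from the plan output rather than taken from the actual testbed source (the `security_group_id` reference is an assumption):

```hcl
# Sketch only: reconstructed from the plan output above, not the actual
# testbed configuration. The security_group_id reference is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Numeric protocol strings like `"112"` cover protocols that have no keyword (such as VRRP), whereas the other rules in the plan use the `udp`/`icmp` keywords.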
2025-06-02 12:35:06.652735 | orchestrator | 12:35:06.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-02 12:35:06.652742 | orchestrator | 12:35:06.651 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-02 12:35:06.652751 | orchestrator | 12:35:06.652 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-02 12:35:06.652757 | orchestrator | 12:35:06.652 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-02 12:35:06.662836 | orchestrator | 12:35:06.662 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-02 12:35:06.662879 | orchestrator | 12:35:06.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-02 12:35:06.663380 | orchestrator | 12:35:06.663 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-02 12:35:07.107929 | orchestrator | 12:35:07.107 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 12:35:07.115805 | orchestrator | 12:35:07.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-06-02 12:35:07.134587 | orchestrator | 12:35:07.134 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-02 12:35:07.145528 | orchestrator | 12:35:07.145 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-02 12:35:07.148573 | orchestrator | 12:35:07.148 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-06-02 12:35:07.154323 | orchestrator | 12:35:07.154 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-02 12:35:12.613283 | orchestrator | 12:35:12.612 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=5ba575f4-d3bb-4601-a091-1900ad6a2a14]
2025-06-02 12:35:12.626373 | orchestrator | 12:35:12.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-02 12:35:16.653664 | orchestrator | 12:35:16.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-02 12:35:16.653793 | orchestrator | 12:35:16.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-02 12:35:16.653810 | orchestrator | 12:35:16.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-02 12:35:16.653973 | orchestrator | 12:35:16.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-02 12:35:16.664839 | orchestrator | 12:35:16.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-02 12:35:16.664933 | orchestrator | 12:35:16.664 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-02 12:35:17.116600 | orchestrator | 12:35:17.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-02 12:35:17.146839 | orchestrator | 12:35:17.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-02 12:35:17.155177 | orchestrator | 12:35:17.154 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-02 12:35:17.299056 | orchestrator | 12:35:17.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=a567a6c2-9a08-4ea9-919c-841e86dd2ba4]
2025-06-02 12:35:17.307565 | orchestrator | 12:35:17.307 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-02 12:35:17.311542 | orchestrator | 12:35:17.311 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=62086343-a56e-4adf-83a5-5e585892be27]
2025-06-02 12:35:17.319279 | orchestrator | 12:35:17.318 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-02 12:35:17.324300 | orchestrator | 12:35:17.324 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=7282e12a-1e67-4050-babb-330e265d22ff]
2025-06-02 12:35:17.324598 | orchestrator | 12:35:17.324 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=fc1422f4-0fb2-4d6b-8db4-e968df408b85]
2025-06-02 12:35:17.338094 | orchestrator | 12:35:17.337 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-02 12:35:17.340058 | orchestrator | 12:35:17.339 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-02 12:35:17.350718 | orchestrator | 12:35:17.350 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=9638a321-9046-4874-bf60-f81fe27729de]
2025-06-02 12:35:17.355940 | orchestrator | 12:35:17.355 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-02 12:35:17.369714 | orchestrator | 12:35:17.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=f391f369-5642-40a7-8413-d92b55d55855]
2025-06-02 12:35:17.375280 | orchestrator | 12:35:17.375 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-02 12:35:17.388034 | orchestrator | 12:35:17.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=bc902884-47f1-4f9c-b2ed-b43aad7d55f5]
2025-06-02 12:35:17.402034 | orchestrator | 12:35:17.401 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-02 12:35:17.405241 | orchestrator | 12:35:17.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=c0fd1d6c-13c9-49be-a163-e67d1493dfa5]
2025-06-02 12:35:17.406732 | orchestrator | 12:35:17.406 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=28e88f76228f8db0d029454ecbf3aded091f106f]
2025-06-02 12:35:17.415667 | orchestrator | 12:35:17.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=21bce83c-356f-424b-8439-404f0c7bc2da]
2025-06-02 12:35:17.417031 | orchestrator | 12:35:17.416 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-02 12:35:17.423675 | orchestrator | 12:35:17.423 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-02 12:35:17.428283 | orchestrator | 12:35:17.428 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=af0272c79cf1f44a658a42c9617c638ea9bc9eb7]
2025-06-02 12:35:22.627309 | orchestrator | 12:35:22.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-02 12:35:22.963496 | orchestrator | 12:35:22.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=2adf1974-ec50-45c6-b0e6-74793c3aa8fd]
2025-06-02 12:35:23.253713 | orchestrator | 12:35:23.253 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=46aaad58-bac8-49dd-9abe-7e86c0685ef2]
2025-06-02 12:35:23.275694 | orchestrator | 12:35:23.275 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-02 12:35:27.308564 | orchestrator | 12:35:27.308 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-02 12:35:27.320586 | orchestrator | 12:35:27.320 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 12:35:27.338930 | orchestrator | 12:35:27.338 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-02 12:35:27.341050 | orchestrator | 12:35:27.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-02 12:35:27.356424 | orchestrator | 12:35:27.356 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-02 12:35:27.375691 | orchestrator | 12:35:27.375 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-02 12:35:27.660110 | orchestrator | 12:35:27.659 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=f6ac710c-9f57-4dea-affc-5d36aeb63db5]
2025-06-02 12:35:27.687118 | orchestrator | 12:35:27.686 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=cb0229d0-7921-4720-b37b-7f30618f5b03]
2025-06-02 12:35:27.702927 | orchestrator | 12:35:27.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=4aa24e4c-05f0-4701-ac23-a15c2e9a093e]
2025-06-02 12:35:27.708764 | orchestrator | 12:35:27.708 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=575132ae-d287-41eb-83c3-e1274e2d90eb]
2025-06-02 12:35:27.721593 | orchestrator | 12:35:27.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=83449818-063b-4895-9e97-f9ff707e075b]
2025-06-02 12:35:27.740377 | orchestrator | 12:35:27.740 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=c8dd7ec9-2951-4d0c-9906-8a987b6e6280]
2025-06-02 12:35:31.201775 | orchestrator | 12:35:31.201 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=df8cb74e-d2e3-4c09-9f30-e824836a4c83]
2025-06-02 12:35:31.208813 | orchestrator | 12:35:31.208 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-02 12:35:31.209165 | orchestrator | 12:35:31.208 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-02 12:35:31.212156 | orchestrator | 12:35:31.211 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
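The indexed `node_volume[0..8]` and `node_base_volume[0..5]` entries above suggest count-based volume resources. A minimal sketch of such a resource, assuming names, size, and count layout that are not visible in the log:

```hcl
# Sketch only: the indexed node_volume[0..8] entries in the log suggest a
# count-based resource. Name pattern, size, and count are assumptions.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-node-volume-${count.index}"
  size  = 20 # GiB; the actual size is not shown in the log
}
```

Terraform creates all nine instances in parallel, which is why the log interleaves their `Creating...` and `Creation complete` messages.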
2025-06-02 12:35:31.380156 | orchestrator | 12:35:31.379 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=19f4a18d-2b85-4842-8be1-f0de98b4f01c]
2025-06-02 12:35:31.386094 | orchestrator | 12:35:31.385 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a04f1c66-0e5a-43d5-96e1-6ff44d624e30]
2025-06-02 12:35:31.393686 | orchestrator | 12:35:31.393 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-02 12:35:31.395080 | orchestrator | 12:35:31.394 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-02 12:35:31.395707 | orchestrator | 12:35:31.395 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-02 12:35:31.403174 | orchestrator | 12:35:31.403 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-02 12:35:31.409576 | orchestrator | 12:35:31.407 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-02 12:35:31.409627 | orchestrator | 12:35:31.407 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-02 12:35:31.409771 | orchestrator | 12:35:31.409 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-02 12:35:31.410174 | orchestrator | 12:35:31.410 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-02 12:35:31.411511 | orchestrator | 12:35:31.411 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-02 12:35:31.563353 | orchestrator | 12:35:31.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=a5ace3fb-4740-4fdb-b834-64d8be39a564]
2025-06-02 12:35:31.572489 | orchestrator | 12:35:31.572 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-02 12:35:31.587856 | orchestrator | 12:35:31.587 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=848cb3a4-f03e-4761-9eeb-6b63bc22b92a]
2025-06-02 12:35:31.603747 | orchestrator | 12:35:31.603 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-02 12:35:31.826273 | orchestrator | 12:35:31.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=04f341e3-498e-4045-a35b-5b700039285b]
2025-06-02 12:35:31.843331 | orchestrator | 12:35:31.842 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-02 12:35:31.881268 | orchestrator | 12:35:31.880 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=bfff7803-9c03-46ec-8a7c-17f08109888b]
2025-06-02 12:35:31.904451 | orchestrator | 12:35:31.904 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-02 12:35:31.972442 | orchestrator | 12:35:31.972 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f38edc13-ed6a-44ab-b656-1063b8294552]
2025-06-02 12:35:31.986378 | orchestrator | 12:35:31.986 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-02 12:35:32.133818 | orchestrator | 12:35:32.133 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=c308330d-500d-4abe-8ef3-a81fbe91b5ea]
2025-06-02 12:35:32.152910 | orchestrator | 12:35:32.152 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-02 12:35:32.208339 | orchestrator | 12:35:32.208 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d4b7b2f4-c084-4400-8a2b-1e4b3741e8aa]
2025-06-02 12:35:32.222268 | orchestrator | 12:35:32.221 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-02 12:35:32.312792 | orchestrator | 12:35:32.312 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=7fa58f5f-6a65-4ef4-be12-81c8f33d9d8e]
2025-06-02 12:35:32.361022 | orchestrator | 12:35:32.360 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=e024a054-5b81-4173-9770-f98885171b03]
2025-06-02 12:35:37.077701 | orchestrator | 12:35:37.077 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=c71508c4-db66-4173-b425-5f9b06abaf01]
2025-06-02 12:35:37.205864 | orchestrator | 12:35:37.205 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=86ce23f3-e5c3-408e-b5d1-476265112df7]
2025-06-02 12:35:37.438865 | orchestrator | 12:35:37.438 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=715bf2e4-92b9-48fb-a1f7-adb5de76e4a4]
2025-06-02 12:35:37.538185 | orchestrator | 12:35:37.537 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=a45774dc-4246-481a-8a11-f6bd4749c740]
2025-06-02 12:35:37.675635 | orchestrator | 12:35:37.675 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=0156630e-d53a-43f7-95b5-72f541d93749]
2025-06-02 12:35:37.885061 | orchestrator | 12:35:37.884 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=09c930e0-bde0-467b-9ee1-4772701eb920]
2025-06-02 12:35:38.238861 | orchestrator | 12:35:38.238 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=ac2bfc39-691d-40dc-9465-a35faab4ff3d]
2025-06-02 12:35:38.434370 | orchestrator | 12:35:38.433 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=38d74f6b-90f7-4927-9122-a7496d32fa31]
2025-06-02 12:35:38.453596 | orchestrator | 12:35:38.453 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-02 12:35:38.471090 | orchestrator | 12:35:38.470 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-02 12:35:38.479327 | orchestrator | 12:35:38.479 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-02 12:35:38.481664 | orchestrator | 12:35:38.481 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-02 12:35:38.486343 | orchestrator | 12:35:38.486 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-02 12:35:38.492631 | orchestrator | 12:35:38.492 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-02 12:35:38.497060 | orchestrator | 12:35:38.496 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
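The six `node_server` instances start creating only after their management ports exist, which matches instances that boot on pre-created ports. A rough sketch of such a resource, where the flavor name and most attributes are assumptions; only the count, the keypair, the image data source, and the port references appear in the log:

```hcl
# Sketch only: reconstructed shape of the node servers. The flavor name is
# an assumption; the image data source, keypair, and management ports are
# visible in the log above.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}" # assumed name pattern
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = "SCS-4V-16" # placeholder flavor, not taken from the log
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    # Attach the pre-created management port instead of letting Nova
    # allocate one, so the fixed IPs and security groups come from Neutron.
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```

Referencing the ports in `network` blocks is what gives Terraform the implicit dependency visible in the log's ordering (ports complete at 12:35:38, servers start immediately afterwards).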
2025-06-02 12:35:45.192433 | orchestrator | 12:35:45.192 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=5996318f-ef61-492f-8135-62939da95f98]
2025-06-02 12:35:45.203521 | orchestrator | 12:35:45.203 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-02 12:35:45.209467 | orchestrator | 12:35:45.209 STDOUT terraform: local_file.inventory: Creating...
2025-06-02 12:35:45.214837 | orchestrator | 12:35:45.214 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-02 12:35:45.216740 | orchestrator | 12:35:45.216 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=7cf0c8d8f4b2aac73f002d271904e91569f0d425]
2025-06-02 12:35:45.220831 | orchestrator | 12:35:45.220 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=84b203c562793153a14075e34f57a5381937c9f9]
2025-06-02 12:35:46.379528 | orchestrator | 12:35:46.379 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5996318f-ef61-492f-8135-62939da95f98]
2025-06-02 12:35:48.472620 | orchestrator | 12:35:48.472 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-02 12:35:48.485102 | orchestrator | 12:35:48.484 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-02 12:35:48.486395 | orchestrator | 12:35:48.486 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-02 12:35:48.488836 | orchestrator | 12:35:48.488 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-02 12:35:48.497943 | orchestrator | 12:35:48.497 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-02 12:35:48.498124 | orchestrator | 12:35:48.497 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-02 12:35:58.473597 | orchestrator | 12:35:58.473 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-02 12:35:58.485898 | orchestrator | 12:35:58.485 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-02 12:35:58.486827 | orchestrator | 12:35:58.486 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-02 12:35:58.489073 | orchestrator | 12:35:58.488 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-02 12:35:58.498441 | orchestrator | 12:35:58.498 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-02 12:35:58.498564 | orchestrator | 12:35:58.498 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-02 12:35:58.975960 | orchestrator | 12:35:58.975 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=b6a8f62d-291f-4f51-8c9d-049d6b154a76]
2025-06-02 12:36:08.475536 | orchestrator | 12:36:08.475 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-02 12:36:08.486574 | orchestrator | 12:36:08.486 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-02 12:36:08.489876 | orchestrator | 12:36:08.489 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-02 12:36:08.499161 | orchestrator | 12:36:08.498 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-06-02 12:36:08.499333 | orchestrator | 12:36:08.499 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-06-02 12:36:08.933602 | orchestrator | 12:36:08.933 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=45c697dd-80ca-4e34-9d81-d95c7d3762a2]
2025-06-02 12:36:08.936122 | orchestrator | 12:36:08.935 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ca2842b8-59b3-419b-9dd8-454692b97661]
2025-06-02 12:36:08.961255 | orchestrator | 12:36:08.960 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=8a9686c0-4c10-459e-90bc-9399481b8dd6]
2025-06-02 12:36:08.987011 | orchestrator | 12:36:08.986 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=8ef20750-09ef-4ac7-8672-3e649d79cb8c]
2025-06-02 12:36:09.412564 | orchestrator | 12:36:09.412 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=372f2877-6c84-4f90-bb17-812ab09cdd31]
2025-06-02 12:36:09.423586 | orchestrator | 12:36:09.423 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-02 12:36:09.440351 | orchestrator | 12:36:09.440 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-02 12:36:09.440846 | orchestrator | 12:36:09.440 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-02 12:36:09.446940 | orchestrator | 12:36:09.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-02 12:36:09.452374 | orchestrator | 12:36:09.452 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
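The attachment IDs that follow have the form `server_id/volume_id`, and the server halves repeat in a pattern: volumes 0, 3, 6 land on one server, 1, 4, 7 on a second, and 2, 5, 8 on a third (the instances created as `node_server[3]`, `[4]`, and `[5]`). A sketch consistent with that observed mapping; the index expression itself is an assumption, not the testbed's actual source:

```hcl
# Sketch only: distributes nine volumes across three of the node servers,
# matching the server_id/volume_id pairs in the attachment IDs below.
# The index arithmetic is an assumption inferred from the log.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```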
2025-06-02 12:36:09.459987 | orchestrator | 12:36:09.459 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8565196008919640863]
2025-06-02 12:36:09.462927 | orchestrator | 12:36:09.462 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-06-02 12:36:09.462968 | orchestrator | 12:36:09.462 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-02 12:36:09.463590 | orchestrator | 12:36:09.463 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-02 12:36:09.489223 | orchestrator | 12:36:09.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-02 12:36:09.489264 | orchestrator | 12:36:09.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-02 12:36:09.494813 | orchestrator | 12:36:09.494 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-06-02 12:36:14.781998 | orchestrator | 12:36:14.781 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=8a9686c0-4c10-459e-90bc-9399481b8dd6/21bce83c-356f-424b-8439-404f0c7bc2da] 2025-06-02 12:36:14.786972 | orchestrator | 12:36:14.786 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=ca2842b8-59b3-419b-9dd8-454692b97661/fc1422f4-0fb2-4d6b-8db4-e968df408b85] 2025-06-02 12:36:14.805610 | orchestrator | 12:36:14.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=ca2842b8-59b3-419b-9dd8-454692b97661/bc902884-47f1-4f9c-b2ed-b43aad7d55f5] 2025-06-02 12:36:14.816685 | orchestrator | 12:36:14.816 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=372f2877-6c84-4f90-bb17-812ab09cdd31/a567a6c2-9a08-4ea9-919c-841e86dd2ba4] 2025-06-02 12:36:14.844453 | orchestrator | 12:36:14.843 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=8a9686c0-4c10-459e-90bc-9399481b8dd6/f391f369-5642-40a7-8413-d92b55d55855] 2025-06-02 12:36:14.870233 | orchestrator | 12:36:14.869 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=ca2842b8-59b3-419b-9dd8-454692b97661/62086343-a56e-4adf-83a5-5e585892be27] 2025-06-02 12:36:14.887831 | orchestrator | 12:36:14.887 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=8a9686c0-4c10-459e-90bc-9399481b8dd6/9638a321-9046-4874-bf60-f81fe27729de] 2025-06-02 12:36:14.904261 | orchestrator | 12:36:14.903 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=372f2877-6c84-4f90-bb17-812ab09cdd31/c0fd1d6c-13c9-49be-a163-e67d1493dfa5] 2025-06-02 12:36:14.922487 | orchestrator | 
12:36:14.922 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=372f2877-6c84-4f90-bb17-812ab09cdd31/7282e12a-1e67-4050-babb-330e265d22ff] 2025-06-02 12:36:19.496451 | orchestrator | 12:36:19.496 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-02 12:36:29.497548 | orchestrator | 12:36:29.497 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-02 12:36:29.888602 | orchestrator | 12:36:29.888 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=6c735cd2-baba-4e5d-959a-501a31fffae8] 2025-06-02 12:36:29.928174 | orchestrator | 12:36:29.927 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-02 12:36:29.928266 | orchestrator | 12:36:29.928 STDOUT terraform: Outputs: 2025-06-02 12:36:29.928274 | orchestrator | 12:36:29.928 STDOUT terraform: manager_address = 2025-06-02 12:36:29.928431 | orchestrator | 12:36:29.928 STDOUT terraform: private_key = 2025-06-02 12:36:30.009879 | orchestrator | ok: Runtime: 0:01:33.129114 2025-06-02 12:36:30.047261 | 2025-06-02 12:36:30.047426 | TASK [Fetch manager address] 2025-06-02 12:36:30.524911 | orchestrator | ok 2025-06-02 12:36:30.535528 | 2025-06-02 12:36:30.535673 | TASK [Set manager_host address] 2025-06-02 12:36:30.620371 | orchestrator | ok 2025-06-02 12:36:30.632017 | 2025-06-02 12:36:30.632248 | LOOP [Update ansible collections] 2025-06-02 12:36:31.510447 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:36:31.510818 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 12:36:31.510921 | orchestrator | Starting galaxy collection install process 2025-06-02 12:36:31.510959 | orchestrator | Process install dependency map 2025-06-02 12:36:31.510992 | orchestrator | Starting collection 
install process 2025-06-02 12:36:31.511022 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-06-02 12:36:31.511089 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-06-02 12:36:31.511128 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-02 12:36:31.511218 | orchestrator | ok: Item: commons Runtime: 0:00:00.554178 2025-06-02 12:36:32.343033 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:36:32.343215 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 12:36:32.343270 | orchestrator | Starting galaxy collection install process 2025-06-02 12:36:32.343312 | orchestrator | Process install dependency map 2025-06-02 12:36:32.343352 | orchestrator | Starting collection install process 2025-06-02 12:36:32.343388 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-06-02 12:36:32.343425 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-06-02 12:36:32.343460 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-02 12:36:32.343517 | orchestrator | ok: Item: services Runtime: 0:00:00.579411 2025-06-02 12:36:32.376382 | 2025-06-02 12:36:32.376702 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 12:36:42.979626 | orchestrator | ok 2025-06-02 12:36:42.992957 | 2025-06-02 12:36:42.993218 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-02 12:37:43.039859 | orchestrator | ok 2025-06-02 12:37:43.051638 | 2025-06-02 12:37:43.051786 | TASK [Fetch manager ssh hostkey] 2025-06-02 12:37:44.630591 | 
orchestrator | Output suppressed because no_log was given 2025-06-02 12:37:44.644770 | 2025-06-02 12:37:44.644946 | TASK [Get ssh keypair from terraform environment] 2025-06-02 12:37:45.186880 | orchestrator | ok: Runtime: 0:00:00.017242 2025-06-02 12:37:45.205111 | 2025-06-02 12:37:45.205296 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 12:37:45.243388 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 12:37:45.254153 | 2025-06-02 12:37:45.254309 | TASK [Run manager part 0] 2025-06-02 12:37:46.384202 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:37:46.437824 | orchestrator | 2025-06-02 12:37:46.437886 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-02 12:37:46.437899 | orchestrator | 2025-06-02 12:37:46.437917 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-02 12:37:48.002397 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:48.002446 | orchestrator | 2025-06-02 12:37:48.002465 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 12:37:48.002474 | orchestrator | 2025-06-02 12:37:48.002483 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:37:49.821569 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:49.821738 | orchestrator | 2025-06-02 12:37:49.821750 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 12:37:50.422840 | orchestrator | ok: [testbed-manager] 2025-06-02 12:37:50.422892 | orchestrator | 2025-06-02 12:37:50.422900 | orchestrator | TASK [Set repo_path fact] 
****************************************************** 2025-06-02 12:37:50.462767 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.462836 | orchestrator | 2025-06-02 12:37:50.462856 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-02 12:37:50.497158 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.497209 | orchestrator | 2025-06-02 12:37:50.497218 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 12:37:50.524455 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.524502 | orchestrator | 2025-06-02 12:37:50.524508 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 12:37:50.548517 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.548552 | orchestrator | 2025-06-02 12:37:50.548557 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 12:37:50.575051 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.575088 | orchestrator | 2025-06-02 12:37:50.575095 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-02 12:37:50.602860 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.602895 | orchestrator | 2025-06-02 12:37:50.602902 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-02 12:37:50.640560 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:37:50.640614 | orchestrator | 2025-06-02 12:37:50.640624 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-02 12:37:51.451671 | orchestrator | changed: [testbed-manager] 2025-06-02 12:37:51.451744 | orchestrator | 2025-06-02 12:37:51.451756 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-02 
12:40:48.749343 | orchestrator | changed: [testbed-manager] 2025-06-02 12:40:48.749415 | orchestrator | 2025-06-02 12:40:48.749430 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-02 12:42:00.324903 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:00.324957 | orchestrator | 2025-06-02 12:42:00.324966 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 12:42:23.868905 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:23.869011 | orchestrator | 2025-06-02 12:42:23.869030 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 12:42:32.201353 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:32.201450 | orchestrator | 2025-06-02 12:42:32.201466 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 12:42:32.250700 | orchestrator | ok: [testbed-manager] 2025-06-02 12:42:32.250774 | orchestrator | 2025-06-02 12:42:32.250785 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-02 12:42:33.022271 | orchestrator | ok: [testbed-manager] 2025-06-02 12:42:33.022363 | orchestrator | 2025-06-02 12:42:33.022381 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-02 12:42:33.738817 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:33.738900 | orchestrator | 2025-06-02 12:42:33.738914 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-02 12:42:40.018568 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:40.018608 | orchestrator | 2025-06-02 12:42:40.018628 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-02 12:42:45.910743 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:45.910810 | 
orchestrator | 2025-06-02 12:42:45.910821 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-02 12:42:48.339792 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:48.339974 | orchestrator | 2025-06-02 12:42:48.339992 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-02 12:42:50.029706 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:50.029793 | orchestrator | 2025-06-02 12:42:50.029808 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-02 12:42:51.134687 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 12:42:51.134739 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 12:42:51.134749 | orchestrator | 2025-06-02 12:42:51.134758 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-02 12:42:51.177573 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 12:42:51.177659 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 12:42:51.177674 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 12:42:51.177687 | orchestrator | deprecation_warnings=False in ansible.cfg. 
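The tasks above pin minimum versions ("Install requests >= 2.32.2", "Install docker >= 7.1.0"). A naive sketch of such a minimum-version check on plain dotted versions (illustrative only; real pip/Ansible resolution follows PEP 440, which also handles pre-releases and epochs):

```python
def meets_minimum(installed, minimum):
    """Compare simple dotted versions numerically, e.g. '2.32.2' >= '2.31.0'.

    Naive: assumes purely numeric components, unlike full PEP 440.
    """
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)
```

Tuple comparison is what makes "7.1.0 >= 7.0.12" come out right where naive string comparison would not.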
2025-06-02 12:42:56.593068 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 12:42:56.593138 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 12:42:56.593147 | orchestrator | 2025-06-02 12:42:56.593156 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-02 12:42:57.150292 | orchestrator | changed: [testbed-manager] 2025-06-02 12:42:57.150333 | orchestrator | 2025-06-02 12:42:57.150346 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-02 12:46:17.665251 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-02 12:46:17.665378 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-02 12:46:17.665406 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-02 12:46:17.665420 | orchestrator | 2025-06-02 12:46:17.665432 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-02 12:46:20.002611 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-02 12:46:20.002705 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-02 12:46:20.002737 | orchestrator | 2025-06-02 12:46:20.002750 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-02 12:46:20.002763 | orchestrator | 2025-06-02 12:46:20.002774 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:46:21.411158 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:21.411225 | orchestrator | 2025-06-02 12:46:21.411240 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 12:46:21.457794 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:21.457864 | 
orchestrator | 2025-06-02 12:46:21.457878 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 12:46:21.565657 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:21.565770 | orchestrator | 2025-06-02 12:46:21.565787 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 12:46:22.305559 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:22.305655 | orchestrator | 2025-06-02 12:46:22.305674 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 12:46:23.041573 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:23.041660 | orchestrator | 2025-06-02 12:46:23.041678 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 12:46:24.396259 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-02 12:46:24.396350 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-02 12:46:24.396365 | orchestrator | 2025-06-02 12:46:24.396393 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-02 12:46:25.803506 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:25.803564 | orchestrator | 2025-06-02 12:46:25.803573 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 12:46:27.525972 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 12:46:27.526008 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-02 12:46:27.526040 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-02 12:46:27.526047 | orchestrator | 2025-06-02 12:46:27.526053 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 12:46:28.086639 | orchestrator | changed: [testbed-manager] 
2025-06-02 12:46:28.086752 | orchestrator | 2025-06-02 12:46:28.086769 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 12:46:28.157421 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:28.157485 | orchestrator | 2025-06-02 12:46:28.157501 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 12:46:29.038637 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 12:46:29.038759 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:29.038769 | orchestrator | 2025-06-02 12:46:29.038777 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 12:46:29.081568 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:29.081657 | orchestrator | 2025-06-02 12:46:29.081665 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 12:46:29.118602 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:29.118692 | orchestrator | 2025-06-02 12:46:29.118729 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-02 12:46:29.157016 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:29.157091 | orchestrator | 2025-06-02 12:46:29.157103 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 12:46:29.209024 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:29.209093 | orchestrator | 2025-06-02 12:46:29.209104 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 12:46:29.916991 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:29.917085 | orchestrator | 2025-06-02 12:46:29.917101 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 12:46:29.917114 | orchestrator | 2025-06-02 
12:46:29.917129 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:46:31.298767 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:31.298867 | orchestrator | 2025-06-02 12:46:31.298893 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-02 12:46:32.239248 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:32.240081 | orchestrator | 2025-06-02 12:46:32.240109 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:46:32.240123 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 12:46:32.240135 | orchestrator | 2025-06-02 12:46:32.613876 | orchestrator | ok: Runtime: 0:08:46.798283 2025-06-02 12:46:32.631655 | 2025-06-02 12:46:32.631793 | TASK [Point out that the log in on the manager is now possible] 2025-06-02 12:46:32.681098 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-02 12:46:32.691409 | 2025-06-02 12:46:32.691535 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 12:46:32.729227 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
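The PLAY RECAP line above (ok=33 changed=23 unreachable=0 failed=0 ...) is the natural place to check a run's outcome programmatically. A small sketch of parsing it (the `parse_recap` name and regex are illustrative, not part of the testbed tooling):

```python
import re

def parse_recap(line):
    """Parse one Ansible PLAY RECAP host line into (host, {counter: int}).

    Expects the shape seen in the log, e.g.
    'testbed-manager : ok=33 changed=23 unreachable=0 failed=0 ...'
    """
    host = line.split(":", 1)[0].strip()
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", line)}
    return host, counters
```

A caller would typically treat `failed > 0` or `unreachable > 0` as a hard error for that host.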
2025-06-02 12:46:32.741343 | 2025-06-02 12:46:32.741477 | TASK [Run manager part 1 + 2] 2025-06-02 12:46:33.597366 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 12:46:33.650507 | orchestrator | 2025-06-02 12:46:33.650554 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-02 12:46:33.650561 | orchestrator | 2025-06-02 12:46:33.650573 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:46:36.565296 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:36.565348 | orchestrator | 2025-06-02 12:46:36.565370 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 12:46:36.603500 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:36.603545 | orchestrator | 2025-06-02 12:46:36.603554 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 12:46:36.650308 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:36.650358 | orchestrator | 2025-06-02 12:46:36.650371 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 12:46:36.690584 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:36.690631 | orchestrator | 2025-06-02 12:46:36.690641 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 12:46:36.752449 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:36.752499 | orchestrator | 2025-06-02 12:46:36.752507 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 12:46:36.809783 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:36.809831 | orchestrator | 2025-06-02 12:46:36.809840 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 12:46:36.850588 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-02 12:46:36.850630 | orchestrator | 2025-06-02 12:46:36.850636 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 12:46:37.553192 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:37.553248 | orchestrator | 2025-06-02 12:46:37.553258 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 12:46:37.597953 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:37.598007 | orchestrator | 2025-06-02 12:46:37.598036 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 12:46:38.906807 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:38.906862 | orchestrator | 2025-06-02 12:46:38.906871 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 12:46:39.485955 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:39.486009 | orchestrator | 2025-06-02 12:46:39.486051 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 12:46:40.619052 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:40.619103 | orchestrator | 2025-06-02 12:46:40.619112 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 12:46:52.180332 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:52.180397 | orchestrator | 2025-06-02 12:46:52.180412 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 12:46:52.826583 | orchestrator | ok: [testbed-manager] 2025-06-02 12:46:52.826769 | orchestrator | 2025-06-02 12:46:52.826790 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-02 12:46:52.877805 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:46:52.877856 | orchestrator | 2025-06-02 12:46:52.877863 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-02 12:46:53.821714 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:53.821811 | orchestrator | 2025-06-02 12:46:53.821827 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-02 12:46:54.808925 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:54.809014 | orchestrator | 2025-06-02 12:46:54.809031 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-02 12:46:55.384818 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:55.384900 | orchestrator | 2025-06-02 12:46:55.384917 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-02 12:46:55.429058 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 12:46:55.429122 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 12:46:55.429129 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 12:46:55.429134 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-02 12:46:58.884883 | orchestrator | changed: [testbed-manager] 2025-06-02 12:46:58.884981 | orchestrator | 2025-06-02 12:46:58.884998 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-02 12:47:07.663561 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-02 12:47:07.663653 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-02 12:47:07.663675 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-02 12:47:07.663680 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-02 12:47:07.663689 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-02 12:47:07.663694 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-02 12:47:07.663698 | orchestrator | 2025-06-02 12:47:07.663703 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-02 12:47:08.694541 | orchestrator | changed: [testbed-manager] 2025-06-02 12:47:08.694625 | orchestrator | 2025-06-02 12:47:08.694641 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-02 12:47:08.739807 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:47:08.739873 | orchestrator | 2025-06-02 12:47:08.739884 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-02 12:47:11.760589 | orchestrator | changed: [testbed-manager] 2025-06-02 12:47:11.760714 | orchestrator | 2025-06-02 12:47:11.760733 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-02 12:47:11.807611 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:47:11.807716 | orchestrator | 2025-06-02 12:47:11.807733 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-02 12:48:44.979546 | orchestrator | changed: [testbed-manager] 2025-06-02 
2025-06-02 12:48:44.979601 | orchestrator |
2025-06-02 12:48:44.979609 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 12:48:46.072873 | orchestrator | ok: [testbed-manager]
2025-06-02 12:48:46.072962 | orchestrator |
2025-06-02 12:48:46.072994 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:48:46.073008 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 12:48:46.073033 | orchestrator |
2025-06-02 12:48:46.364370 | orchestrator | ok: Runtime: 0:02:13.103146
2025-06-02 12:48:46.383366 |
2025-06-02 12:48:46.383537 | TASK [Reboot manager]
2025-06-02 12:48:47.927214 | orchestrator | ok: Runtime: 0:00:00.992292
2025-06-02 12:48:47.938664 |
2025-06-02 12:48:47.938808 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 12:49:01.700812 | orchestrator | ok
2025-06-02 12:49:01.711936 |
2025-06-02 12:49:01.712132 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 12:50:01.756715 | orchestrator | ok
2025-06-02 12:50:01.766135 |
2025-06-02 12:50:01.766270 | TASK [Deploy manager + bootstrap nodes]
2025-06-02 12:50:04.209916 | orchestrator |
2025-06-02 12:50:04.210275 | orchestrator | # DEPLOY MANAGER
2025-06-02 12:50:04.210310 | orchestrator |
2025-06-02 12:50:04.210325 | orchestrator | + set -e
2025-06-02 12:50:04.210339 | orchestrator | + echo
2025-06-02 12:50:04.210353 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-02 12:50:04.210371 | orchestrator | + echo
2025-06-02 12:50:04.210421 | orchestrator | + cat /opt/manager-vars.sh
2025-06-02 12:50:04.213706 | orchestrator | export NUMBER_OF_NODES=6
2025-06-02 12:50:04.213751 | orchestrator |
2025-06-02 12:50:04.213764 | orchestrator | export CEPH_VERSION=reef
2025-06-02 12:50:04.213778 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-02 12:50:04.213791 | orchestrator | export MANAGER_VERSION=9.1.0
2025-06-02 12:50:04.213815 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-02 12:50:04.213827 | orchestrator |
2025-06-02 12:50:04.213846 | orchestrator | export ARA=false
2025-06-02 12:50:04.213858 | orchestrator | export DEPLOY_MODE=manager
2025-06-02 12:50:04.213876 | orchestrator | export TEMPEST=false
2025-06-02 12:50:04.213888 | orchestrator | export IS_ZUUL=true
2025-06-02 12:50:04.213900 | orchestrator |
2025-06-02 12:50:04.213918 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 12:50:04.213930 | orchestrator | export EXTERNAL_API=false
2025-06-02 12:50:04.213941 | orchestrator |
2025-06-02 12:50:04.213951 | orchestrator | export IMAGE_USER=ubuntu
2025-06-02 12:50:04.213967 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-02 12:50:04.213978 | orchestrator |
2025-06-02 12:50:04.213990 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-02 12:50:04.214009 | orchestrator |
2025-06-02 12:50:04.214070 | orchestrator | + echo
2025-06-02 12:50:04.214095 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 12:50:04.215044 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 12:50:04.215068 | orchestrator | ++ INTERACTIVE=false
2025-06-02 12:50:04.215082 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 12:50:04.215101 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 12:50:04.215251 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 12:50:04.215274 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 12:50:04.215288 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 12:50:04.215299 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 12:50:04.215310 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 12:50:04.215321 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 12:50:04.215335 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 12:50:04.215352 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 12:50:04.215369 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 12:50:04.215381 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 12:50:04.215403 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 12:50:04.215419 | orchestrator | ++ export ARA=false
2025-06-02 12:50:04.215430 | orchestrator | ++ ARA=false
2025-06-02 12:50:04.215442 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 12:50:04.215453 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 12:50:04.215463 | orchestrator | ++ export TEMPEST=false
2025-06-02 12:50:04.215474 | orchestrator | ++ TEMPEST=false
2025-06-02 12:50:04.215517 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 12:50:04.215529 | orchestrator | ++ IS_ZUUL=true
2025-06-02 12:50:04.215540 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 12:50:04.215552 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 12:50:04.215567 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 12:50:04.215578 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 12:50:04.215589 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 12:50:04.215599 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 12:50:04.215610 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 12:50:04.215621 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 12:50:04.215632 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 12:50:04.215644 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 12:50:04.215664 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 12:50:04.266986 | orchestrator | + docker version
2025-06-02 12:50:04.515905 | orchestrator | Client: Docker Engine - Community
2025-06-02 12:50:04.516022 | orchestrator | Version: 27.5.1
2025-06-02 12:50:04.516041 | orchestrator | API version: 1.47
2025-06-02 12:50:04.516053 | orchestrator | Go version: go1.22.11
2025-06-02 12:50:04.516066 | orchestrator | Git commit: 9f9e405
2025-06-02 12:50:04.516077 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 12:50:04.516090 | orchestrator | OS/Arch: linux/amd64
2025-06-02 12:50:04.516101 | orchestrator | Context: default
2025-06-02 12:50:04.516112 | orchestrator |
2025-06-02 12:50:04.516123 | orchestrator | Server: Docker Engine - Community
2025-06-02 12:50:04.516134 | orchestrator | Engine:
2025-06-02 12:50:04.516146 | orchestrator | Version: 27.5.1
2025-06-02 12:50:04.516157 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 12:50:04.516198 | orchestrator | Go version: go1.22.11
2025-06-02 12:50:04.516210 | orchestrator | Git commit: 4c9b3b0
2025-06-02 12:50:04.516221 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 12:50:04.516231 | orchestrator | OS/Arch: linux/amd64
2025-06-02 12:50:04.516242 | orchestrator | Experimental: false
2025-06-02 12:50:04.516253 | orchestrator | containerd:
2025-06-02 12:50:04.516264 | orchestrator | Version: 1.7.27
2025-06-02 12:50:04.516276 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 12:50:04.516287 | orchestrator | runc:
2025-06-02 12:50:04.516298 | orchestrator | Version: 1.2.5
2025-06-02 12:50:04.516309 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 12:50:04.516320 | orchestrator | docker-init:
2025-06-02 12:50:04.516331 | orchestrator | Version: 0.19.0
2025-06-02 12:50:04.516343 | orchestrator | GitCommit: de40ad0
2025-06-02 12:50:04.518663 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 12:50:04.529821 | orchestrator | + set -e
2025-06-02 12:50:04.529880 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 12:50:04.529893 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 12:50:04.529904 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 12:50:04.529915 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 12:50:04.529926 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 12:50:04.529937 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 12:50:04.529949 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 12:50:04.529959 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 12:50:04.529970 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 12:50:04.529981 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 12:50:04.529992 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 12:50:04.530003 | orchestrator | ++ export ARA=false
2025-06-02 12:50:04.530081 | orchestrator | ++ ARA=false
2025-06-02 12:50:04.530095 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 12:50:04.530107 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 12:50:04.530117 | orchestrator | ++ export TEMPEST=false
2025-06-02 12:50:04.530128 | orchestrator | ++ TEMPEST=false
2025-06-02 12:50:04.530139 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 12:50:04.530150 | orchestrator | ++ IS_ZUUL=true
2025-06-02 12:50:04.530161 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 12:50:04.530172 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 12:50:04.530183 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 12:50:04.530195 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 12:50:04.530205 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 12:50:04.530216 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 12:50:04.530228 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 12:50:04.530239 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 12:50:04.530250 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 12:50:04.530261 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 12:50:04.530272 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 12:50:04.530283 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 12:50:04.530294 | orchestrator | ++ INTERACTIVE=false
2025-06-02 12:50:04.530305 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 12:50:04.530321 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 12:50:04.530340 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 12:50:04.530353 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-02 12:50:04.537753 | orchestrator | + set -e
2025-06-02 12:50:04.537813 | orchestrator | + VERSION=9.1.0
2025-06-02 12:50:04.537827 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 12:50:04.545920 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 12:50:04.545972 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 12:50:04.550631 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-02 12:50:04.554062 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-02 12:50:04.562296 | orchestrator | /opt/configuration ~
2025-06-02 12:50:04.562337 | orchestrator | + set -e
2025-06-02 12:50:04.562350 | orchestrator | + pushd /opt/configuration
2025-06-02 12:50:04.562362 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 12:50:04.564936 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 12:50:04.565870 | orchestrator | ++ deactivate nondestructive
2025-06-02 12:50:04.565889 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:04.565912 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:04.565954 | orchestrator | ++ hash -r
2025-06-02 12:50:04.565966 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:04.565977 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 12:50:04.565988 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 12:50:04.565999 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 12:50:04.566116 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 12:50:04.566137 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 12:50:04.566148 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 12:50:04.566166 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 12:50:04.566179 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 12:50:04.566199 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 12:50:04.566211 | orchestrator | ++ export PATH
2025-06-02 12:50:04.566222 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:04.566233 | orchestrator | ++ '[' -z '' ']'
2025-06-02 12:50:04.566244 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 12:50:04.566254 | orchestrator | ++ PS1='(venv) '
2025-06-02 12:50:04.566265 | orchestrator | ++ export PS1
2025-06-02 12:50:04.566276 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 12:50:04.566287 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 12:50:04.566303 | orchestrator | ++ hash -r
2025-06-02 12:50:04.566324 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-02 12:50:05.587565 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-02 12:50:05.588186 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-02 12:50:05.589561 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-02 12:50:05.590813 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-02 12:50:05.591839 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-02 12:50:05.601646 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-02 12:50:05.603023 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-02 12:50:05.604022 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-02 12:50:05.605329 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-02 12:50:05.635403 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-02 12:50:05.636900 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-02 12:50:05.638465 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-02 12:50:05.639885 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-02 12:50:05.643857 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-02 12:50:05.845759 | orchestrator | ++ which gilt
2025-06-02 12:50:05.850238 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-02 12:50:05.850343 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-02 12:50:06.058176 | orchestrator | osism.cfg-generics:
2025-06-02 12:50:06.211698 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-02 12:50:06.212455 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-02 12:50:06.213695 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-02 12:50:06.213721 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-02 12:50:06.955987 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-02 12:50:06.978013 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-02 12:50:07.450544 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-02 12:50:07.502178 | orchestrator | ~
2025-06-02 12:50:07.502339 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 12:50:07.502398 | orchestrator | + deactivate
2025-06-02 12:50:07.502429 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 12:50:07.502459 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 12:50:07.502536 | orchestrator | + export PATH
2025-06-02 12:50:07.502555 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 12:50:07.502575 | orchestrator | + '[' -n '' ']'
2025-06-02 12:50:07.502605 | orchestrator | + hash -r
2025-06-02 12:50:07.502630 | orchestrator | + '[' -n '' ']'
2025-06-02 12:50:07.502649 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 12:50:07.502667 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 12:50:07.502683 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-02 12:50:07.502699 | orchestrator | + unset -f deactivate
2025-06-02 12:50:07.502716 | orchestrator | + popd
2025-06-02 12:50:07.503834 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 12:50:07.503885 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 12:50:07.504908 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 12:50:07.564246 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 12:50:07.564355 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 12:50:07.564372 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 12:50:07.609950 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 12:50:07.610167 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 12:50:07.610188 | orchestrator | ++ deactivate nondestructive
2025-06-02 12:50:07.610201 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:07.610213 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:07.610224 | orchestrator | ++ hash -r
2025-06-02 12:50:07.610235 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:07.610246 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 12:50:07.610257 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 12:50:07.610268 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 12:50:07.610279 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 12:50:07.610291 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 12:50:07.610302 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 12:50:07.610313 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 12:50:07.610325 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 12:50:07.610337 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 12:50:07.610382 | orchestrator | ++ export PATH
2025-06-02 12:50:07.610395 | orchestrator | ++ '[' -n '' ']'
2025-06-02 12:50:07.610420 | orchestrator | ++ '[' -z '' ']'
2025-06-02 12:50:07.610431 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 12:50:07.610442 | orchestrator | ++ PS1='(venv) '
2025-06-02 12:50:07.610453 | orchestrator | ++ export PS1
2025-06-02 12:50:07.610464 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 12:50:07.610474 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 12:50:07.610514 | orchestrator | ++ hash -r
2025-06-02 12:50:07.610526 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 12:50:08.654855 | orchestrator |
2025-06-02 12:50:08.654991 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 12:50:08.655018 | orchestrator |
2025-06-02 12:50:08.655037 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 12:50:09.225854 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:09.225951 | orchestrator |
2025-06-02 12:50:09.225964 | orchestrator | TASK [Copy fact files] *********************************************************
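The `semver 9.1.0 7.0.0` call above returns a comparison result that the deploy script tests with `[[ 1 -ge 0 ]]` before emitting `enable_osism_kubernetes: true`. The real comparator is `/opt/configuration/contrib/semver2.sh`; the following is only a rough stand-in using `sort -V` to illustrate the gating pattern:

```shell
# Illustrative stand-in for the semver gate (NOT semver2.sh itself):
# version_ge A B succeeds (exit 0) when A >= B in version order.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Mirror of the log: enable the flag only for manager versions >= 7.0.0.
if version_ge 9.1.0 7.0.0; then
    echo 'enable_osism_kubernetes: true'
fi
```

With `MANAGER_VERSION=9.1.0` this prints the flag line, matching the `echo 'enable_osism_kubernetes: true'` step in the trace.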
2025-06-02 12:50:10.234769 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:10.234870 | orchestrator |
2025-06-02 12:50:10.234885 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 12:50:10.234897 | orchestrator |
2025-06-02 12:50:10.234907 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 12:50:12.734749 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:12.734877 | orchestrator |
2025-06-02 12:50:12.734895 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 12:50:12.787804 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:12.787911 | orchestrator |
2025-06-02 12:50:12.787927 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 12:50:13.278792 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:13.278901 | orchestrator |
2025-06-02 12:50:13.278921 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 12:50:13.325061 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:13.325149 | orchestrator |
2025-06-02 12:50:13.325164 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 12:50:13.669618 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:13.669720 | orchestrator |
2025-06-02 12:50:13.669734 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 12:50:13.732224 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:13.732315 | orchestrator |
2025-06-02 12:50:13.732329 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 12:50:14.094740 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:14.094835 | orchestrator |
2025-06-02 12:50:14.094849 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 12:50:14.222193 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:14.222413 | orchestrator |
2025-06-02 12:50:14.222453 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 12:50:14.222467 | orchestrator |
2025-06-02 12:50:14.222520 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 12:50:15.989919 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:15.990007 | orchestrator |
2025-06-02 12:50:15.990060 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 12:50:16.086544 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 12:50:16.086647 | orchestrator |
2025-06-02 12:50:16.086662 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 12:50:16.143283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 12:50:16.143377 | orchestrator |
2025-06-02 12:50:16.143398 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 12:50:17.243336 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 12:50:17.243446 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-02 12:50:17.243465 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 12:50:17.243523 | orchestrator |
2025-06-02 12:50:17.243536 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 12:50:19.042914 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 12:50:19.043029 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
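The `ansible-playbook -i testbed-manager, ...` invocation earlier in the trace relies on an Ansible convention: a `-i` value containing a trailing comma is parsed as an inline host list rather than a path to an inventory file. A small shell sketch of that parsing rule (the function name is illustrative, not an Ansible internal):

```shell
# Hypothetical helper mimicking how Ansible distinguishes an inline host
# list ("host1,host2," or "testbed-manager,") from an inventory file path.
parse_inventory_arg() {
    case "$1" in
        *,) printf 'inline host list: %s\n' "${1%,}" ;;
        *)  printf 'inventory path: %s\n' "$1" ;;
    esac
}

parse_inventory_arg 'testbed-manager,'   # -> inline host list: testbed-manager
parse_inventory_arg '/etc/ansible/hosts' # -> inventory path: /etc/ansible/hosts
```

This is why the playbook can target the single manager node without any inventory file existing on the orchestrator.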
2025-06-02 12:50:19.043045 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 12:50:19.043058 | orchestrator |
2025-06-02 12:50:19.043072 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 12:50:19.656791 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 12:50:19.656891 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:19.656905 | orchestrator |
2025-06-02 12:50:19.656915 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 12:50:20.306993 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 12:50:20.307086 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:20.307098 | orchestrator |
2025-06-02 12:50:20.307108 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 12:50:20.365074 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:20.365166 | orchestrator |
2025-06-02 12:50:20.365180 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 12:50:20.764901 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:20.764996 | orchestrator |
2025-06-02 12:50:20.765010 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 12:50:20.841954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 12:50:20.842109 | orchestrator |
2025-06-02 12:50:20.842128 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 12:50:21.865654 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:21.865777 | orchestrator |
2025-06-02 12:50:21.865803 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02 12:50:22.626001 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:22.626152 | orchestrator |
2025-06-02 12:50:22.626172 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 12:50:32.985927 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:32.986108 | orchestrator |
2025-06-02 12:50:32.986148 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 12:50:33.039589 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:33.039683 | orchestrator |
2025-06-02 12:50:33.039698 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 12:50:33.039710 | orchestrator |
2025-06-02 12:50:33.039722 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 12:50:34.913737 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:34.913847 | orchestrator |
2025-06-02 12:50:34.913865 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 12:50:35.021042 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 12:50:35.021164 | orchestrator |
2025-06-02 12:50:35.021180 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 12:50:35.077424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 12:50:35.077584 | orchestrator |
2025-06-02 12:50:35.077600 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 12:50:37.441744 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:37.441849 | orchestrator |
2025-06-02 12:50:37.441864 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 12:50:37.495174 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:37.495268 | orchestrator |
2025-06-02 12:50:37.495282 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 12:50:37.621618 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 12:50:37.621718 | orchestrator |
2025-06-02 12:50:37.621733 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 12:50:40.378411 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 12:50:40.378584 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 12:50:40.378601 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 12:50:40.378614 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 12:50:40.378626 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 12:50:40.378637 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 12:50:40.378648 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 12:50:40.378660 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 12:50:40.378672 | orchestrator |
2025-06-02 12:50:40.378686 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 12:50:41.021143 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:41.021239 | orchestrator |
2025-06-02 12:50:41.021254 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 12:50:41.638913 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:41.639012 | orchestrator |
2025-06-02 12:50:41.639027 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 12:50:41.719672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 12:50:41.719757 | orchestrator |
2025-06-02 12:50:41.719771 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 12:50:42.889284 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 12:50:42.889393 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 12:50:42.889409 | orchestrator |
2025-06-02 12:50:42.889422 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 12:50:43.500567 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:43.500672 | orchestrator |
2025-06-02 12:50:43.500689 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 12:50:43.538693 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:43.538788 | orchestrator |
2025-06-02 12:50:43.538813 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 12:50:43.586140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 12:50:43.586229 | orchestrator |
2025-06-02 12:50:43.586244 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 12:50:44.903296 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 12:50:44.903386 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 12:50:44.903396 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:44.903405 | orchestrator |
2025-06-02 12:50:44.903412 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 12:50:45.503572 | orchestrator | changed: [testbed-manager]
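The `set-manager-version.sh` trace earlier in this log pins the release by rewriting `manager_version:` in `configuration.yml` with `sed` and deleting the `ceph_version:`/`openstack_version:` keys so that defaults for the pinned release apply. The same steps can be replayed against a scratch copy of the file (the real target is `/opt/configuration/environments/manager/configuration.yml`; GNU `sed -i` assumed):

```shell
# Reproduce the version-pinning edits from the log on a temp file.
cfg=$(mktemp)
printf 'manager_version: latest\nceph_version: reef\nopenstack_version: 2024.2\n' > "$cfg"

VERSION=9.1.0
sed -i "s/manager_version: .*/manager_version: $VERSION/g" "$cfg"  # pin the manager release
sed -i '/ceph_version:/d' "$cfg"                                   # drop explicit ceph pin
sed -i '/openstack_version:/d' "$cfg"                              # drop explicit openstack pin

cat "$cfg"   # only "manager_version: 9.1.0" remains
```

After these edits the manager release fully determines which Ceph and OpenStack versions are used.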
2025-06-02 12:50:45.503676 | orchestrator |
2025-06-02 12:50:45.503693 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 12:50:45.550489 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:45.550533 | orchestrator |
2025-06-02 12:50:45.550545 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 12:50:45.636098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 12:50:45.636198 | orchestrator |
2025-06-02 12:50:45.636214 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 12:50:46.146533 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:46.146642 | orchestrator |
2025-06-02 12:50:46.146661 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 12:50:46.535550 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:46.535637 | orchestrator |
2025-06-02 12:50:46.535648 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 12:50:47.722707 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 12:50:47.722820 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 12:50:47.722836 | orchestrator |
2025-06-02 12:50:47.722849 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 12:50:48.354120 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:48.354217 | orchestrator |
2025-06-02 12:50:48.354233 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 12:50:48.742614 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:48.742717 | orchestrator |
2025-06-02 12:50:48.742732 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-02 12:50:49.094163 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:49.094263 | orchestrator |
2025-06-02 12:50:49.094278 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 12:50:49.143541 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:49.143632 | orchestrator |
2025-06-02 12:50:49.143646 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 12:50:49.213564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 12:50:49.213658 | orchestrator |
2025-06-02 12:50:49.213672 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 12:50:49.250737 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:49.250799 | orchestrator |
2025-06-02 12:50:49.250811 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 12:50:51.182085 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 12:50:51.182251 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 12:50:51.182275 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 12:50:51.182290 | orchestrator |
2025-06-02 12:50:51.182307 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 12:50:51.867751 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:51.867867 | orchestrator |
2025-06-02 12:50:51.867883 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 12:50:52.563975 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:52.564089 | orchestrator |
2025-06-02 12:50:52.564107 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 12:50:53.275110 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:53.275215 | orchestrator |
2025-06-02 12:50:53.275231 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 12:50:53.352768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 12:50:53.352866 | orchestrator |
2025-06-02 12:50:53.352881 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 12:50:53.397516 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:53.397604 | orchestrator |
2025-06-02 12:50:53.397626 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 12:50:54.073770 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 12:50:54.073903 | orchestrator |
2025-06-02 12:50:54.073929 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 12:50:54.161161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 12:50:54.161257 | orchestrator |
2025-06-02 12:50:54.161270 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 12:50:54.853694 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:54.853797 | orchestrator |
2025-06-02 12:50:54.853814 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 12:50:55.455562 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:55.455656 | orchestrator |
2025-06-02 12:50:55.455672 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 12:50:55.507898 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:50:55.507976 | orchestrator |
2025-06-02 12:50:55.507992 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 12:50:55.569391 | orchestrator | ok: [testbed-manager]
2025-06-02 12:50:55.569520 | orchestrator |
2025-06-02 12:50:55.569536 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 12:50:56.352017 | orchestrator | changed: [testbed-manager]
2025-06-02 12:50:56.352116 | orchestrator |
2025-06-02 12:50:56.352131 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 12:51:55.818457 | orchestrator | changed: [testbed-manager]
2025-06-02 12:51:55.818583 | orchestrator |
2025-06-02 12:51:55.818603 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 12:51:56.748737 | orchestrator | ok: [testbed-manager]
2025-06-02 12:51:56.748844 | orchestrator |
2025-06-02 12:51:56.748860 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 12:51:56.798871 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:51:56.798944 | orchestrator |
2025-06-02 12:51:56.798958 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 12:51:59.229451 | orchestrator | changed: [testbed-manager]
2025-06-02 12:51:59.229560 | orchestrator |
2025-06-02 12:51:59.229576 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-02 12:51:59.287117 | orchestrator | ok: [testbed-manager]
2025-06-02 12:51:59.287227 | orchestrator |
2025-06-02 12:51:59.287250 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 12:51:59.287270 | orchestrator |
2025-06-02 12:51:59.287282 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 12:51:59.342240 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:51:59.342338 | orchestrator | 2025-06-02 12:51:59.342423 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 12:52:59.393062 | orchestrator | Pausing for 60 seconds 2025-06-02 12:52:59.393184 | orchestrator | changed: [testbed-manager] 2025-06-02 12:52:59.393200 | orchestrator | 2025-06-02 12:52:59.393213 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 12:53:03.463926 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:03.464035 | orchestrator | 2025-06-02 12:53:03.464053 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 12:53:45.040153 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 12:53:45.040271 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 12:53:45.040322 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:45.040336 | orchestrator | 2025-06-02 12:53:45.040348 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-02 12:53:53.344802 | orchestrator | changed: [testbed-manager] 2025-06-02 12:53:53.344926 | orchestrator | 2025-06-02 12:53:53.344967 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-02 12:53:53.432215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-02 12:53:53.432393 | orchestrator | 2025-06-02 12:53:53.432411 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 12:53:53.432424 | orchestrator | 2025-06-02 12:53:53.432436 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-02 12:53:53.489380 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:53:53.489444 | orchestrator | 2025-06-02 12:53:53.489457 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:53:53.489470 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 12:53:53.489481 | orchestrator | 2025-06-02 12:53:53.581531 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-02 12:53:53.581615 | orchestrator | + deactivate 2025-06-02 12:53:53.581629 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-02 12:53:53.581642 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-02 12:53:53.581653 | orchestrator | + export PATH 2025-06-02 12:53:53.581669 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-02 
12:53:53.581682 | orchestrator | + '[' -n '' ']' 2025-06-02 12:53:53.581694 | orchestrator | + hash -r 2025-06-02 12:53:53.581704 | orchestrator | + '[' -n '' ']' 2025-06-02 12:53:53.581716 | orchestrator | + unset VIRTUAL_ENV 2025-06-02 12:53:53.581726 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-02 12:53:53.581737 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-02 12:53:53.581748 | orchestrator | + unset -f deactivate 2025-06-02 12:53:53.581759 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-02 12:53:53.587864 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 12:53:53.587891 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 12:53:53.587902 | orchestrator | + local max_attempts=60 2025-06-02 12:53:53.587913 | orchestrator | + local name=ceph-ansible 2025-06-02 12:53:53.587924 | orchestrator | + local attempt_num=1 2025-06-02 12:53:53.588505 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 12:53:53.614558 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:53:53.614640 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 12:53:53.614660 | orchestrator | + local max_attempts=60 2025-06-02 12:53:53.614680 | orchestrator | + local name=kolla-ansible 2025-06-02 12:53:53.614698 | orchestrator | + local attempt_num=1 2025-06-02 12:53:53.614965 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 12:53:53.646320 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:53:53.646383 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 12:53:53.646397 | orchestrator | + local max_attempts=60 2025-06-02 12:53:53.646409 | orchestrator | + local name=osism-ansible 2025-06-02 12:53:53.646421 | orchestrator | + local attempt_num=1 2025-06-02 12:53:53.646939 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-02 12:53:53.681487 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 12:53:53.681551 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 12:53:53.681566 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 12:53:54.371433 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-02 12:53:54.575613 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-02 12:53:54.575718 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575736 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575749 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-02 12:53:54.575810 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-02 12:53:54.575821 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575832 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575843 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-02 12:53:54.575854 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-02 12:53:54.575864 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-02 12:53:54.575875 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575886 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-02 12:53:54.575896 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575907 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575917 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.575928 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-02 12:53:54.583652 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-02 12:53:54.642206 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-02 12:53:54.642366 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-02 12:53:54.646392 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-02 12:53:56.316610 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:53:56.316712 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:53:56.316726 | orchestrator | Registering Redlock._release_script 
2025-06-02 12:53:56.507958 | orchestrator | 2025-06-02 12:53:56 | INFO  | Task e4f279a0-9282-4fac-b503-3881f84b25ca (resolvconf) was prepared for execution. 2025-06-02 12:53:56.508056 | orchestrator | 2025-06-02 12:53:56 | INFO  | It takes a moment until task e4f279a0-9282-4fac-b503-3881f84b25ca (resolvconf) has been started and output is visible here. 2025-06-02 12:53:59.990090 | orchestrator | 2025-06-02 12:53:59.990916 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-02 12:53:59.991184 | orchestrator | 2025-06-02 12:53:59.992738 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 12:53:59.993673 | orchestrator | Monday 02 June 2025 12:53:59 +0000 (0:00:00.105) 0:00:00.105 *********** 2025-06-02 12:54:03.246793 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:03.247052 | orchestrator | 2025-06-02 12:54:03.247076 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 12:54:03.247885 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:03.257) 0:00:03.363 *********** 2025-06-02 12:54:03.302336 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:54:03.302522 | orchestrator | 2025-06-02 12:54:03.303173 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 12:54:03.303607 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:00.056) 0:00:03.420 *********** 2025-06-02 12:54:03.361044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-02 12:54:03.361528 | orchestrator | 2025-06-02 12:54:03.362120 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 12:54:03.362727 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:00.059) 0:00:03.479 
*********** 2025-06-02 12:54:03.411325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 12:54:03.411789 | orchestrator | 2025-06-02 12:54:03.412406 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 12:54:03.413238 | orchestrator | Monday 02 June 2025 12:54:03 +0000 (0:00:00.050) 0:00:03.530 *********** 2025-06-02 12:54:04.378643 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:04.379300 | orchestrator | 2025-06-02 12:54:04.382502 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 12:54:04.383146 | orchestrator | Monday 02 June 2025 12:54:04 +0000 (0:00:00.965) 0:00:04.495 *********** 2025-06-02 12:54:04.440475 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:54:04.440574 | orchestrator | 2025-06-02 12:54:04.441308 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 12:54:04.441385 | orchestrator | Monday 02 June 2025 12:54:04 +0000 (0:00:00.062) 0:00:04.558 *********** 2025-06-02 12:54:04.941736 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:04.941843 | orchestrator | 2025-06-02 12:54:04.941861 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 12:54:04.941874 | orchestrator | Monday 02 June 2025 12:54:04 +0000 (0:00:00.501) 0:00:05.059 *********** 2025-06-02 12:54:05.017613 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:54:05.018114 | orchestrator | 2025-06-02 12:54:05.018825 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 12:54:05.019531 | orchestrator | Monday 02 June 2025 12:54:05 +0000 (0:00:00.076) 0:00:05.135 *********** 2025-06-02 12:54:05.503846 | orchestrator | changed: [testbed-manager] 
2025-06-02 12:54:05.504239 | orchestrator | 2025-06-02 12:54:05.505033 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 12:54:05.505487 | orchestrator | Monday 02 June 2025 12:54:05 +0000 (0:00:00.483) 0:00:05.619 *********** 2025-06-02 12:54:06.471701 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:06.472073 | orchestrator | 2025-06-02 12:54:06.473351 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 12:54:06.473496 | orchestrator | Monday 02 June 2025 12:54:06 +0000 (0:00:00.969) 0:00:06.588 *********** 2025-06-02 12:54:07.398116 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:07.398380 | orchestrator | 2025-06-02 12:54:07.399080 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 12:54:07.399719 | orchestrator | Monday 02 June 2025 12:54:07 +0000 (0:00:00.925) 0:00:07.513 *********** 2025-06-02 12:54:07.491092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-02 12:54:07.491237 | orchestrator | 2025-06-02 12:54:07.492116 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 12:54:07.492709 | orchestrator | Monday 02 June 2025 12:54:07 +0000 (0:00:00.094) 0:00:07.608 *********** 2025-06-02 12:54:08.585878 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:08.585992 | orchestrator | 2025-06-02 12:54:08.586008 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:54:08.586491 | orchestrator | 2025-06-02 12:54:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:54:08.586519 | orchestrator | 2025-06-02 12:54:08 | INFO  | Please wait and do not abort execution. 
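The resolvconf play above ends by linking `/etc/resolv.conf` to systemd-resolved's stub file and restarting the service. A minimal shell sketch of the linking step, replayed against a scratch directory so it can run unprivileged (the real role targets `/`, first removes packages that manage `/etc/resolv.conf`, and lets systemd-resolved populate the stub file):

```shell
# Scratch-directory replay of the "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" task; paths are relocated under $root for illustration.
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"
# systemd-resolved's local stub listener address, as it would appear in the
# stub file on a real host
printf 'nameserver 127.0.0.53\n' > "$root/run/systemd/resolve/stub-resolv.conf"
# force-replace any existing /etc/resolv.conf with the symlink
ln -sf "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"
```

On the real host the role then restarts `systemd-resolved` (the final `changed` task before the PLAY RECAP) so that resolution goes through the stub resolver.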
2025-06-02 12:54:08.587798 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 12:54:08.588436 | orchestrator | 2025-06-02 12:54:08.589075 | orchestrator | 2025-06-02 12:54:08.589875 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:54:08.590538 | orchestrator | Monday 02 June 2025 12:54:08 +0000 (0:00:01.091) 0:00:08.699 *********** 2025-06-02 12:54:08.591192 | orchestrator | =============================================================================== 2025-06-02 12:54:08.591572 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s 2025-06-02 12:54:08.592029 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s 2025-06-02 12:54:08.592428 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.97s 2025-06-02 12:54:08.593028 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.97s 2025-06-02 12:54:08.593363 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2025-06-02 12:54:08.593873 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-06-02 12:54:08.594184 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2025-06-02 12:54:08.594526 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-06-02 12:54:08.595039 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-02 12:54:08.595434 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-06-02 12:54:08.595884 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.06s 2025-06-02 
12:54:08.596368 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-06-02 12:54:08.596699 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.05s 2025-06-02 12:54:08.991344 | orchestrator | + osism apply sshconfig 2025-06-02 12:54:10.613845 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:54:10.613968 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:54:10.613984 | orchestrator | Registering Redlock._release_script 2025-06-02 12:54:10.667452 | orchestrator | 2025-06-02 12:54:10 | INFO  | Task 765b8e0a-52d2-4c61-b7bb-c36953569b66 (sshconfig) was prepared for execution. 2025-06-02 12:54:10.667569 | orchestrator | 2025-06-02 12:54:10 | INFO  | It takes a moment until task 765b8e0a-52d2-4c61-b7bb-c36953569b66 (sshconfig) has been started and output is visible here. 2025-06-02 12:54:14.443782 | orchestrator | 2025-06-02 12:54:14.444458 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-02 12:54:14.445400 | orchestrator | 2025-06-02 12:54:14.446479 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-02 12:54:14.447421 | orchestrator | Monday 02 June 2025 12:54:14 +0000 (0:00:00.156) 0:00:00.156 *********** 2025-06-02 12:54:14.988151 | orchestrator | ok: [testbed-manager] 2025-06-02 12:54:14.989922 | orchestrator | 2025-06-02 12:54:14.989955 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-02 12:54:14.990306 | orchestrator | Monday 02 June 2025 12:54:14 +0000 (0:00:00.546) 0:00:00.703 *********** 2025-06-02 12:54:15.472155 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:15.472720 | orchestrator | 2025-06-02 12:54:15.473555 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-02 12:54:15.474608 | orchestrator 
| Monday 02 June 2025 12:54:15 +0000 (0:00:00.485) 0:00:01.188 *********** 2025-06-02 12:54:20.906006 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-02 12:54:20.906404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-02 12:54:20.907207 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-02 12:54:20.908418 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-02 12:54:20.909343 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-02 12:54:20.910323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-02 12:54:20.911409 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-02 12:54:20.912398 | orchestrator | 2025-06-02 12:54:20.913778 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-02 12:54:20.914618 | orchestrator | Monday 02 June 2025 12:54:20 +0000 (0:00:05.432) 0:00:06.620 *********** 2025-06-02 12:54:20.962945 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:54:20.963586 | orchestrator | 2025-06-02 12:54:20.964470 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-02 12:54:20.964873 | orchestrator | Monday 02 June 2025 12:54:20 +0000 (0:00:00.059) 0:00:06.680 *********** 2025-06-02 12:54:21.539093 | orchestrator | changed: [testbed-manager] 2025-06-02 12:54:21.539745 | orchestrator | 2025-06-02 12:54:21.540655 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 12:54:21.540687 | orchestrator | 2025-06-02 12:54:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 12:54:21.541032 | orchestrator | 2025-06-02 12:54:21 | INFO  | Please wait and do not abort execution. 
2025-06-02 12:54:21.541871 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 12:54:21.542661 | orchestrator | 2025-06-02 12:54:21.543637 | orchestrator | 2025-06-02 12:54:21.544091 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 12:54:21.544642 | orchestrator | Monday 02 June 2025 12:54:21 +0000 (0:00:00.576) 0:00:07.256 *********** 2025-06-02 12:54:21.545461 | orchestrator | =============================================================================== 2025-06-02 12:54:21.546189 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.43s 2025-06-02 12:54:21.546733 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-06-02 12:54:21.546924 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2025-06-02 12:54:21.547455 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-06-02 12:54:21.547808 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-06-02 12:54:21.957817 | orchestrator | + osism apply known-hosts 2025-06-02 12:54:23.581846 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:54:23.581948 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:54:23.581962 | orchestrator | Registering Redlock._release_script 2025-06-02 12:54:23.636721 | orchestrator | 2025-06-02 12:54:23 | INFO  | Task 93e5a981-82ac-4956-a3cc-ee5afc630882 (known-hosts) was prepared for execution. 2025-06-02 12:54:23.637224 | orchestrator | 2025-06-02 12:54:23 | INFO  | It takes a moment until task 93e5a981-82ac-4956-a3cc-ee5afc630882 (known-hosts) has been started and output is visible here. 
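The known-hosts play that follows runs `ssh-keyscan` for every testbed host and writes the collected entries out. A minimal offline illustration of that scan-then-write pattern, with the scan stubbed so the sketch runs without network access (the key below is a placeholder, not one of the keys recorded in this job):

```shell
# Stubbed stand-in for: ssh-keyscan <host>
# (returns one placeholder entry per host instead of real scanned keys)
scan() {
    printf '%s ssh-ed25519 AAAA-placeholder-key\n' "$1"
}

# Collect one scanned entry per host into a known_hosts file, mirroring the
# "Run ssh-keyscan for all hosts" / "Write scanned known_hosts entries" tasks.
known_hosts=$(mktemp)
for host in testbed-manager testbed-node-0 testbed-node-1; do
    scan "$host" >> "$known_hosts"
done
```

The real tasks record three entries per host (rsa, ecdsa, ed25519), as visible in the `changed` items below.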
2025-06-02 12:54:27.475790 | orchestrator | 2025-06-02 12:54:27.477047 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-02 12:54:27.477103 | orchestrator | 2025-06-02 12:54:27.478088 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-02 12:54:27.478545 | orchestrator | Monday 02 June 2025 12:54:27 +0000 (0:00:00.162) 0:00:00.162 *********** 2025-06-02 12:54:33.313210 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-02 12:54:33.313386 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-02 12:54:33.313562 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-02 12:54:33.314372 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-02 12:54:33.315946 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-02 12:54:33.316393 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-02 12:54:33.317454 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-02 12:54:33.318404 | orchestrator | 2025-06-02 12:54:33.319210 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-02 12:54:33.320078 | orchestrator | Monday 02 June 2025 12:54:33 +0000 (0:00:05.837) 0:00:06.000 *********** 2025-06-02 12:54:33.464359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-02 12:54:33.466130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-02 12:54:33.466815 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 12:54:33.467685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 12:54:33.468404 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 12:54:33.469036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 12:54:33.469587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 12:54:33.470366 | orchestrator |
2025-06-02 12:54:33.470950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:33.471454 | orchestrator | Monday 02 June 2025 12:54:33 +0000 (0:00:00.153) 0:00:06.153 ***********
2025-06-02 12:54:34.593024 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTEW1A1jguRIDE2ATC6ZUNqBfbxgrrgK+4JxJJzNpBfOFBXRNlMQq5KrTYhfLHbnY8PLGRyU130cPRFpS/evas=)
2025-06-02 12:54:34.593135 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7QdUEjTPd02cBkO3j6HQkYIWotK0WmH4AHrZ69S+OKOU8pmg1bPSowCuxVwvIk9rI+iXj/aEVA/866YzpTB58bvrXvsylwLcP3/rQVrcn94K2LQHj8+z94y238GB3/bdT3CHfnsLwMSQDlEA8217gGrnmkRNqpQnO1tNygcGeB92STiOojBM4/eij1Ni/IZB+5cqB4bhm/Ds9WmmRDLgqGdEEtgHSIBwjG/yKT/YWGl9ehWfP0R1pnoWZmvyFfIkJh1ff/FG0VJID+7IvBn4QyszKIRnBikntTXwcwUdAfUIONF161ioMVoXmgOYpbHeNBmtG+zE8E7r52P7Z2QZuMDnldxxbvQfTD6hXBwcRLIdusETeflVwANW0bb4d2dTzDaiNK9khgu4IpJxQd09tgvNASZRRcBRjGqbyvwzib9YoI9HmMWKVfRqPWLfg+EGRS6+C7COljnN8u2ypZTOwYrHVg6E+sExs5vjdB1EexnQBJQfe5XSyLv/O6fRYzrU=)
2025-06-02 12:54:34.593182 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHbzV16Vim1sEi8z5ECVtjk8zZKXebx64eCJWYdmsUC+)
2025-06-02 12:54:34.593506 | orchestrator |
2025-06-02 12:54:34.594172 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:34.594741 | orchestrator | Monday 02 June 2025 12:54:34 +0000 (0:00:01.124) 0:00:07.278 ***********
2025-06-02 12:54:35.616132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Q6TYytD67rIVEkAu6K2eK7Z+P5NlXA3uAeDEJI4mE6vc9A5h/RxJa/6eMOTs1ycGsB3G/3i5WLHQwkbZWC148YEwEZgVhlbw8nCc3HBTV5GkOBveCszYAGgYCme9l4WiLfKY1I0gK0h//tGf7z4Hr17Zn6wRegu8huARlmAkKPFj3nyl6iI9dHCJeWSAtpnyJU+K31q5FjKgtBdcCA2xTugNzLMAuCCAKt3HNQ9eRXqwXUWAFP0LsX4ti7j8M+QXHUgDVO6JOzGUVerOt3L3wrQpRzh3aPYEdOkHIsjDM3qJXkbfML7AKKYevodbtpE3luGmCK3fFIN5tTalSbzvOOkwYbVmp9GfxZiiG06LvrVh/Wpg0G7ZI2VbRhcG5Cgyoe8buA7ZUKb0sHZJ9HXwM7nA608J+cLO7lCpfJQmYepJUMFLaDRA1hmRtKZDXRWm+VtmOW6ODT/2tFCzWx5P1IynIAj8fjEYZ6dhjER6qV1jynnDfebO0My5bUYqhOM=)
2025-06-02 12:54:35.616476 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGkepNU1xAIdmHjhMPos+7pNlaNsPvjVaZXH7ltgG3KtoP7uwqbpq3AciT1ui4aPfZUuah1GTscHUcqqnx9gi+c=)
2025-06-02 12:54:35.617141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0G/kvxZpllpnaHyjTDlKjgJoc1yFQFoXqFDMu8QDBr)
2025-06-02 12:54:35.617616 | orchestrator |
2025-06-02 12:54:35.618719 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:35.618882 | orchestrator | Monday 02 June 2025 12:54:35 +0000 (0:00:01.025) 0:00:08.304 ***********
2025-06-02 12:54:36.633875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSrMSgs7kYrv1TTwWrJPnHiJwphWWF0ER2E8ZBNoz6ekLyQo5On2SyFLvw2wgDruOZ/M9QUFEsR3MgQKNZRgFhhLwsjvdEXmqnAWOTmgF9CXKpc/uug3fJDycKMYqzuG3IdH31IEjO4X4+YlTaDaHpq+EUHRxBF/JnMxCGjp0uYnqDTV4vOCHtu4CFcNKpF5Yz3fP/Qm2rRp95b/49hOua+lGDX4EUKa1ujiyr776JIBzsNISCYATgbDoS4AIF4lpZCngUZAcvML+lsZvdHqUy+7NFZ6+fpbV2EMQS7/3mFXKx1LOxe7VDObtLucWZCMOxFw0At/WeAhX6pMExv7urzEzVPrLsrg1gNimNBjHj65CXFgldO/kj6wjypy/roG5URu+mwSt/BEnBotrfhAbgfiyIeGNFIKq6ipbmYKSrL0vPd9pbNUL59WncDFx/YwsKKbUKas8vn906s3Ii5bk5Dz4e0NVC94I9iz7kqiuTXBwNX/qRXyGFE+yT3sT3Bo8=)
2025-06-02 12:54:36.633980 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZziWMz2pqTvSu8FhlT8iy9JKMIhk4EGGh1veJa0ipuIE/lRZ2+znmJiVGbH+Cfsfbu/bzjvkovIy2Grwtul3s=)
2025-06-02 12:54:36.635009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9ChqKFtZpsdR3zwxlXfOgYDOobNf9a0zaogGIO4dBh)
2025-06-02 12:54:36.635932 | orchestrator |
2025-06-02 12:54:36.636853 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:36.637531 | orchestrator | Monday 02 June 2025 12:54:36 +0000 (0:00:01.017) 0:00:09.321 ***********
2025-06-02 12:54:37.662361 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoGOiflt5DKtwJVfVV3XStkG/vnT1RSpa9+vE5r14MsN+0WAlJc0irbVlcFI9KIIyAWl3HSVrY+W2n/ejeRKq5aCCSmD14zrteXWlknds+jAPNsFeO7yRsoaPDO2LhNyQIQGPTJhPFhKlTOjTy3VGuIjKEs6bQZFcZHgoOxteRTWxf1G6M588+0ysESlw/PkmvcEHxb7FVSUP0untj5/mKLVpb8aQnxE+uKu+VOVs3xWnBdY1mQlmRSsvEQMuX0z45stO2a7EjeX+tr/EHpF2ZidyddDufGKzSrlNCMBb5V5QW1aw2DQpAG60lG5+4efsMSMHS4GD51N4BYeVFSjKEqOidISTDkgYh0aeXZql9ntWU88yZngSt80Ck+XYswdmH4Ndcu5lg12PBlj5QRE13wbW11QLsz4sHeDsbIjGAzjyLcB4V6QmnsQc39sdywLaGOui1jMjQ0MCcEk+edaugm37ctPDXVxW71YyLDpX56/z9B+3D/QF42V7k6G1goHM=)
2025-06-02 12:54:37.662642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIWdXOtX9iMCNiA56RxdI7+HBjBmik1Rb1C5zUtfM80p)
2025-06-02 12:54:37.663113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJUG2TXJjmeoemNNhbUb0Lfer3OWh0T/cT5vgWUrKGNWPcGcHYVJufKLHEGWBxBU2XeYdpTGT+42H/i39tSjZw=)
2025-06-02 12:54:37.663544 | orchestrator |
2025-06-02 12:54:37.664142 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:37.664438 | orchestrator | Monday 02 June 2025 12:54:37 +0000 (0:00:01.029) 0:00:10.351 ***********
2025-06-02 12:54:38.695646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDADw0F61IoY2hDGnx4BWSWYuGR6EuyteeaNfIumrj6i)
2025-06-02 12:54:38.697460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAH3k4PY9rxFRsDhJOQPGxfLm7kbZSaL+4i21xvpwDGGJVuiDkixxCJ8jtd3svvoYbnWEFE3ROhKG4fKnDpqqSENLYzxaQ/w1OheoxOzaE3XF0U2hnIR858BeS/ZccvRdh/xvyKPMK/1/VKrFkgwih4WwWuK7BQSAl11G7xkZK4e4jWHQMcUaUZJTaefKm9RVVvD1AhexIou8ScZK2FLSIk30McDL543KoBFbvAQWH720dIo+zzKKEAu3QL0cU5AyX9PgO70lqC00LZX0JhwQLGKFM3n2bDCWPVOd0gj7QU6SGJpRp1uVCUnrclxVnaEsOT8EIjQw9siX29B3KKDDlly04QCb9Cge9VVchzATR0wJvg7lpjkTyALxacj9IG42ORLEzyctBvuhs8Bm0C6LMmPm9FpRcPYBIaKPb38q7rFsjFpS+p1JsDN+e5SZBsoT+k0T6IIWvP2EjftSBIc3+hQ+Rt4sDpo0ZM8NCcHN3TxsDnYn9dmnZL3bs3UcstsM=)
2025-06-02 12:54:38.697495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP4H9QCUgy4NeAHH47Nu/DSio8TlGsAeg4NyGPLsoWZVtFBXuNZl2hcMFTmqaPA3ZKf6gahILnd1OZ9YNLkVi/w=)
2025-06-02 12:54:38.697921 | orchestrator |
2025-06-02 12:54:38.699045 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:38.699452 | orchestrator | Monday 02 June 2025 12:54:38 +0000 (0:00:01.033) 0:00:11.384 ***********
2025-06-02 12:54:39.688905 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj0IChiDVbVFx21dQk4+taT+ZR45xaNrqDVlfkCFJ6HrVNmDjHYeEJwrB64GQoGcRASlNeJRqm8Co6ilsVO8NR9GljAbaVHVltuAOWWoBxytPKWR8nHNca/Xj3M1CMNyEdHQAgVGiWPdCmStH2MsiZ7xVt5oznEB3rP8grYOrh7d0FYDQi99UsoEiABFb0Bxl6srRH/uV5kQD1mtcaYZ8hmhYeDwAG4OqZtCTAI+3nYMZQYAi8LjCpkUTK+KClIhpy3ohuUq1fRxpsvefpKKfCAu5CNnn5AtZuRHVNbTB2QZTMAW4xXq7a/GRa6yO2pjeJXSjpaY1nVbrmh8V3nJ89ESGylWwrh0ZlikfY+qZofYfjz1ykIy2XXeXbCaYEBOT3/n8oVlykUp4Yzm+SiujM2QJb2ECDQIzpMZoztKLQ2gICsbG7sJwFluVSe1yTzvyLQiQ4HsIY0YID3rrXFM1T8/zFPLX9FQ+dFxfPgnJsIgIkD8jrQfcft6ZBUxakRCE=)
2025-06-02 12:54:39.689109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOoKLwwsUyM1UckeceKsXb+or1H1hQuGmzCYZrSlLvrR)
2025-06-02 12:54:39.690640 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXrTuWRR+edMP8MOjgHYlivLXprOmHdo4seFJM/uLWAWvQ6Dt667esKY7SNN4DeC8VPl0OjhraNSIow6St1rk4=)
2025-06-02 12:54:39.691402 | orchestrator |
2025-06-02 12:54:39.691986 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:39.692853 | orchestrator | Monday 02 June 2025 12:54:39 +0000 (0:00:00.992) 0:00:12.377 ***********
2025-06-02 12:54:40.696127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ50Fyq2A7caPORwhjNDNoX89HWBRFC+JfcdybH/+gv/T51txhxvDv2F7ZhCu7YTpft7hCbCw0zdj6AVsn35Igo=)
2025-06-02 12:54:40.696792 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ04OaNR8VdTCPu2UwAx+iDoCutA3Bdi5ctMuodUDQyu)
2025-06-02 12:54:40.698430 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDddAqDu42hbPvl3M6MRsPt6OOIGNsrAikRa58tRE14CDQhz7SfDoueaaJEZHmhR48ILyzqTnHCeO59f0uwAeun6Rb6uAjZ+6+JZ+Hkyj71bzL39tQAyVSTIfCjjSTmjEmbOBYZX70qb0dW++NcfUd1llAzOnW1crRmGRLAAydjgwwj6n4zFUbtDvUjxm3FHuHg3WaypfnGHFlqnW1MY+qsW+QXYXcvVFr9YhoTPbbxEX1/aHgN8xcWai+E6UhYcntZbtnqOq2H67oJoD5TKoQUrosqTW79ZjifedoObBhTVxk3rjRBoywHvi/5zvXwjF/IVNNAlKb+wlgYx2OA4gox7IiFTj+Cj/z8+tXdEFY6mDbiCRjffpjKa0fEC4mlTRBJM0eWPAtmTMyOzDoRX/AzUwWlvTfirGVjNtrUiyX1zrR++X7epiBCYS2AyyxkwRIOxAlZZosC7cADBA9K9ju959hU5Z8y3TlAtzESEMkHExq8EdB3intC6vT3LEwJRM=)
2025-06-02 12:54:40.698580 | orchestrator |
2025-06-02 12:54:40.699330 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-02 12:54:40.699865 | orchestrator | Monday 02 June 2025 12:54:40 +0000 (0:00:01.007) 0:00:13.384 ***********
2025-06-02 12:54:45.857857 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 12:54:45.858727 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 12:54:45.862225 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 12:54:45.862442 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 12:54:45.864903 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 12:54:45.866576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 12:54:45.867395 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 12:54:45.868083 | orchestrator |
2025-06-02 12:54:45.869111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-02 12:54:45.869850 | orchestrator | Monday 02 June 2025 12:54:45 +0000 (0:00:05.160) 0:00:18.544 ***********
2025-06-02 12:54:46.004449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 12:54:46.005027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 12:54:46.006175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 12:54:46.007209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 12:54:46.008377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 12:54:46.008909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 12:54:46.009672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 12:54:46.010402 | orchestrator |
2025-06-02 12:54:46.011285 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:46.011847 | orchestrator | Monday 02 June 2025 12:54:45 +0000 (0:00:00.149) 0:00:18.694 ***********
2025-06-02 12:54:47.007333 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHbzV16Vim1sEi8z5ECVtjk8zZKXebx64eCJWYdmsUC+)
2025-06-02 12:54:47.008173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7QdUEjTPd02cBkO3j6HQkYIWotK0WmH4AHrZ69S+OKOU8pmg1bPSowCuxVwvIk9rI+iXj/aEVA/866YzpTB58bvrXvsylwLcP3/rQVrcn94K2LQHj8+z94y238GB3/bdT3CHfnsLwMSQDlEA8217gGrnmkRNqpQnO1tNygcGeB92STiOojBM4/eij1Ni/IZB+5cqB4bhm/Ds9WmmRDLgqGdEEtgHSIBwjG/yKT/YWGl9ehWfP0R1pnoWZmvyFfIkJh1ff/FG0VJID+7IvBn4QyszKIRnBikntTXwcwUdAfUIONF161ioMVoXmgOYpbHeNBmtG+zE8E7r52P7Z2QZuMDnldxxbvQfTD6hXBwcRLIdusETeflVwANW0bb4d2dTzDaiNK9khgu4IpJxQd09tgvNASZRRcBRjGqbyvwzib9YoI9HmMWKVfRqPWLfg+EGRS6+C7COljnN8u2ypZTOwYrHVg6E+sExs5vjdB1EexnQBJQfe5XSyLv/O6fRYzrU=)
2025-06-02 12:54:47.008622 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTEW1A1jguRIDE2ATC6ZUNqBfbxgrrgK+4JxJJzNpBfOFBXRNlMQq5KrTYhfLHbnY8PLGRyU130cPRFpS/evas=)
2025-06-02 12:54:47.008977 | orchestrator |
2025-06-02 12:54:47.009611 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:47.010094 | orchestrator | Monday 02 June 2025 12:54:47 +0000 (0:00:01.001) 0:00:19.695 ***********
2025-06-02 12:54:47.994493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/Q6TYytD67rIVEkAu6K2eK7Z+P5NlXA3uAeDEJI4mE6vc9A5h/RxJa/6eMOTs1ycGsB3G/3i5WLHQwkbZWC148YEwEZgVhlbw8nCc3HBTV5GkOBveCszYAGgYCme9l4WiLfKY1I0gK0h//tGf7z4Hr17Zn6wRegu8huARlmAkKPFj3nyl6iI9dHCJeWSAtpnyJU+K31q5FjKgtBdcCA2xTugNzLMAuCCAKt3HNQ9eRXqwXUWAFP0LsX4ti7j8M+QXHUgDVO6JOzGUVerOt3L3wrQpRzh3aPYEdOkHIsjDM3qJXkbfML7AKKYevodbtpE3luGmCK3fFIN5tTalSbzvOOkwYbVmp9GfxZiiG06LvrVh/Wpg0G7ZI2VbRhcG5Cgyoe8buA7ZUKb0sHZJ9HXwM7nA608J+cLO7lCpfJQmYepJUMFLaDRA1hmRtKZDXRWm+VtmOW6ODT/2tFCzWx5P1IynIAj8fjEYZ6dhjER6qV1jynnDfebO0My5bUYqhOM=)
2025-06-02 12:54:47.994671 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGkepNU1xAIdmHjhMPos+7pNlaNsPvjVaZXH7ltgG3KtoP7uwqbpq3AciT1ui4aPfZUuah1GTscHUcqqnx9gi+c=)
2025-06-02 12:54:47.996364 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0G/kvxZpllpnaHyjTDlKjgJoc1yFQFoXqFDMu8QDBr)
2025-06-02 12:54:47.996734 | orchestrator |
2025-06-02 12:54:47.997209 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:47.997549 | orchestrator | Monday 02 June 2025 12:54:47 +0000 (0:00:00.987) 0:00:20.683 ***********
2025-06-02 12:54:49.013149 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSrMSgs7kYrv1TTwWrJPnHiJwphWWF0ER2E8ZBNoz6ekLyQo5On2SyFLvw2wgDruOZ/M9QUFEsR3MgQKNZRgFhhLwsjvdEXmqnAWOTmgF9CXKpc/uug3fJDycKMYqzuG3IdH31IEjO4X4+YlTaDaHpq+EUHRxBF/JnMxCGjp0uYnqDTV4vOCHtu4CFcNKpF5Yz3fP/Qm2rRp95b/49hOua+lGDX4EUKa1ujiyr776JIBzsNISCYATgbDoS4AIF4lpZCngUZAcvML+lsZvdHqUy+7NFZ6+fpbV2EMQS7/3mFXKx1LOxe7VDObtLucWZCMOxFw0At/WeAhX6pMExv7urzEzVPrLsrg1gNimNBjHj65CXFgldO/kj6wjypy/roG5URu+mwSt/BEnBotrfhAbgfiyIeGNFIKq6ipbmYKSrL0vPd9pbNUL59WncDFx/YwsKKbUKas8vn906s3Ii5bk5Dz4e0NVC94I9iz7kqiuTXBwNX/qRXyGFE+yT3sT3Bo8=)
2025-06-02 12:54:49.013398 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHZziWMz2pqTvSu8FhlT8iy9JKMIhk4EGGh1veJa0ipuIE/lRZ2+znmJiVGbH+Cfsfbu/bzjvkovIy2Grwtul3s=)
2025-06-02 12:54:49.013573 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK9ChqKFtZpsdR3zwxlXfOgYDOobNf9a0zaogGIO4dBh)
2025-06-02 12:54:49.014132 | orchestrator |
2025-06-02 12:54:49.014165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:49.015274 | orchestrator | Monday 02 June 2025 12:54:49 +0000 (0:00:01.016) 0:00:21.700 ***********
2025-06-02 12:54:50.027509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJUG2TXJjmeoemNNhbUb0Lfer3OWh0T/cT5vgWUrKGNWPcGcHYVJufKLHEGWBxBU2XeYdpTGT+42H/i39tSjZw=)
2025-06-02 12:54:50.027619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoGOiflt5DKtwJVfVV3XStkG/vnT1RSpa9+vE5r14MsN+0WAlJc0irbVlcFI9KIIyAWl3HSVrY+W2n/ejeRKq5aCCSmD14zrteXWlknds+jAPNsFeO7yRsoaPDO2LhNyQIQGPTJhPFhKlTOjTy3VGuIjKEs6bQZFcZHgoOxteRTWxf1G6M588+0ysESlw/PkmvcEHxb7FVSUP0untj5/mKLVpb8aQnxE+uKu+VOVs3xWnBdY1mQlmRSsvEQMuX0z45stO2a7EjeX+tr/EHpF2ZidyddDufGKzSrlNCMBb5V5QW1aw2DQpAG60lG5+4efsMSMHS4GD51N4BYeVFSjKEqOidISTDkgYh0aeXZql9ntWU88yZngSt80Ck+XYswdmH4Ndcu5lg12PBlj5QRE13wbW11QLsz4sHeDsbIjGAzjyLcB4V6QmnsQc39sdywLaGOui1jMjQ0MCcEk+edaugm37ctPDXVxW71YyLDpX56/z9B+3D/QF42V7k6G1goHM=)
2025-06-02 12:54:50.027674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIWdXOtX9iMCNiA56RxdI7+HBjBmik1Rb1C5zUtfM80p)
2025-06-02 12:54:50.028454 | orchestrator |
2025-06-02 12:54:50.029280 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:50.030090 | orchestrator | Monday 02 June 2025 12:54:50 +0000 (0:00:01.014) 0:00:22.714 ***********
2025-06-02 12:54:51.048177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDADw0F61IoY2hDGnx4BWSWYuGR6EuyteeaNfIumrj6i)
2025-06-02 12:54:51.048304 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAH3k4PY9rxFRsDhJOQPGxfLm7kbZSaL+4i21xvpwDGGJVuiDkixxCJ8jtd3svvoYbnWEFE3ROhKG4fKnDpqqSENLYzxaQ/w1OheoxOzaE3XF0U2hnIR858BeS/ZccvRdh/xvyKPMK/1/VKrFkgwih4WwWuK7BQSAl11G7xkZK4e4jWHQMcUaUZJTaefKm9RVVvD1AhexIou8ScZK2FLSIk30McDL543KoBFbvAQWH720dIo+zzKKEAu3QL0cU5AyX9PgO70lqC00LZX0JhwQLGKFM3n2bDCWPVOd0gj7QU6SGJpRp1uVCUnrclxVnaEsOT8EIjQw9siX29B3KKDDlly04QCb9Cge9VVchzATR0wJvg7lpjkTyALxacj9IG42ORLEzyctBvuhs8Bm0C6LMmPm9FpRcPYBIaKPb38q7rFsjFpS+p1JsDN+e5SZBsoT+k0T6IIWvP2EjftSBIc3+hQ+Rt4sDpo0ZM8NCcHN3TxsDnYn9dmnZL3bs3UcstsM=)
2025-06-02 12:54:51.048315 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP4H9QCUgy4NeAHH47Nu/DSio8TlGsAeg4NyGPLsoWZVtFBXuNZl2hcMFTmqaPA3ZKf6gahILnd1OZ9YNLkVi/w=)
2025-06-02 12:54:51.048630 | orchestrator |
2025-06-02 12:54:51.048928 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:51.049387 | orchestrator | Monday 02 June 2025 12:54:51 +0000 (0:00:01.021) 0:00:23.735 ***********
2025-06-02 12:54:52.052521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj0IChiDVbVFx21dQk4+taT+ZR45xaNrqDVlfkCFJ6HrVNmDjHYeEJwrB64GQoGcRASlNeJRqm8Co6ilsVO8NR9GljAbaVHVltuAOWWoBxytPKWR8nHNca/Xj3M1CMNyEdHQAgVGiWPdCmStH2MsiZ7xVt5oznEB3rP8grYOrh7d0FYDQi99UsoEiABFb0Bxl6srRH/uV5kQD1mtcaYZ8hmhYeDwAG4OqZtCTAI+3nYMZQYAi8LjCpkUTK+KClIhpy3ohuUq1fRxpsvefpKKfCAu5CNnn5AtZuRHVNbTB2QZTMAW4xXq7a/GRa6yO2pjeJXSjpaY1nVbrmh8V3nJ89ESGylWwrh0ZlikfY+qZofYfjz1ykIy2XXeXbCaYEBOT3/n8oVlykUp4Yzm+SiujM2QJb2ECDQIzpMZoztKLQ2gICsbG7sJwFluVSe1yTzvyLQiQ4HsIY0YID3rrXFM1T8/zFPLX9FQ+dFxfPgnJsIgIkD8jrQfcft6ZBUxakRCE=)
2025-06-02 12:54:52.053329 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXrTuWRR+edMP8MOjgHYlivLXprOmHdo4seFJM/uLWAWvQ6Dt667esKY7SNN4DeC8VPl0OjhraNSIow6St1rk4=)
2025-06-02 12:54:52.054559 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOoKLwwsUyM1UckeceKsXb+or1H1hQuGmzCYZrSlLvrR)
2025-06-02 12:54:52.055251 | orchestrator |
2025-06-02 12:54:52.055987 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 12:54:52.056677 | orchestrator | Monday 02 June 2025 12:54:52 +0000 (0:00:01.005) 0:00:24.740 ***********
2025-06-02 12:54:53.050529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDddAqDu42hbPvl3M6MRsPt6OOIGNsrAikRa58tRE14CDQhz7SfDoueaaJEZHmhR48ILyzqTnHCeO59f0uwAeun6Rb6uAjZ+6+JZ+Hkyj71bzL39tQAyVSTIfCjjSTmjEmbOBYZX70qb0dW++NcfUd1llAzOnW1crRmGRLAAydjgwwj6n4zFUbtDvUjxm3FHuHg3WaypfnGHFlqnW1MY+qsW+QXYXcvVFr9YhoTPbbxEX1/aHgN8xcWai+E6UhYcntZbtnqOq2H67oJoD5TKoQUrosqTW79ZjifedoObBhTVxk3rjRBoywHvi/5zvXwjF/IVNNAlKb+wlgYx2OA4gox7IiFTj+Cj/z8+tXdEFY6mDbiCRjffpjKa0fEC4mlTRBJM0eWPAtmTMyOzDoRX/AzUwWlvTfirGVjNtrUiyX1zrR++X7epiBCYS2AyyxkwRIOxAlZZosC7cADBA9K9ju959hU5Z8y3TlAtzESEMkHExq8EdB3intC6vT3LEwJRM=)
2025-06-02 12:54:53.051407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ50Fyq2A7caPORwhjNDNoX89HWBRFC+JfcdybH/+gv/T51txhxvDv2F7ZhCu7YTpft7hCbCw0zdj6AVsn35Igo=)
2025-06-02 12:54:53.052005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ04OaNR8VdTCPu2UwAx+iDoCutA3Bdi5ctMuodUDQyu)
2025-06-02 12:54:53.053562 | orchestrator |
2025-06-02 12:54:53.054105 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-06-02 12:54:53.054792 | orchestrator | Monday 02 June 2025 12:54:53 +0000 (0:00:00.996) 0:00:25.737 ***********
2025-06-02 12:54:53.196663 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 12:54:53.198361 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 12:54:53.199907 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 12:54:53.200687 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 12:54:53.202296 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 12:54:53.202575 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 12:54:53.203582 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 12:54:53.204743 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:54:53.205566 | orchestrator |
2025-06-02 12:54:53.206312 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-06-02 12:54:53.206948 | orchestrator | Monday 02 June 2025 12:54:53 +0000 (0:00:00.149) 0:00:25.886 ***********
2025-06-02 12:54:53.258409 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:54:53.258899 | orchestrator |
2025-06-02 12:54:53.259597 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-02 12:54:53.260754 | orchestrator | Monday 02 June 2025 12:54:53 +0000 (0:00:00.062) 0:00:25.948 ***********
2025-06-02 12:54:53.317388 | orchestrator | skipping: [testbed-manager]
2025-06-02 12:54:53.317967 | orchestrator |
2025-06-02 12:54:53.318867 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-06-02 12:54:53.319880 | orchestrator | Monday 02 June 2025 12:54:53 +0000 (0:00:00.058) 0:00:26.007 ***********
2025-06-02 12:54:53.940597 | orchestrator | changed: [testbed-manager]
2025-06-02 12:54:53.941740 | orchestrator |
2025-06-02 12:54:53.942985 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:54:53.943555 | orchestrator | 2025-06-02 12:54:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:54:53.943581 | orchestrator | 2025-06-02 12:54:53 | INFO  | Please wait and do not abort execution.
2025-06-02 12:54:53.945294 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 12:54:53.946579 | orchestrator |
2025-06-02 12:54:53.947680 | orchestrator |
2025-06-02 12:54:53.948725 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:54:53.949866 | orchestrator | Monday 02 June 2025 12:54:53 +0000 (0:00:00.622) 0:00:26.629 ***********
2025-06-02 12:54:53.950381 | orchestrator | ===============================================================================
2025-06-02 12:54:53.951498 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.84s
2025-06-02 12:54:53.952128 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s
2025-06-02 12:54:53.953254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-06-02 12:54:53.953837 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-02 12:54:53.954476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-02 12:54:53.954857 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-02 12:54:53.955505 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 12:54:53.956030 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 12:54:53.956554 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-02 12:54:53.957082 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-02 12:54:53.957501 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-02 12:54:53.957938 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-06-02 12:54:53.958373 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-02 12:54:53.958995 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-06-02 12:54:53.959567 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-02 12:54:53.960004 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-02 12:54:53.960564 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.62s
2025-06-02 12:54:53.961030 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s
2025-06-02 12:54:53.961508 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-06-02 12:54:53.961968 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.15s
2025-06-02 12:54:54.399367 | orchestrator | + osism apply squid
2025-06-02 12:54:56.006828 | orchestrator | Registering Redlock._acquired_script
2025-06-02 12:54:56.006932 | orchestrator | Registering Redlock._extend_script
2025-06-02 12:54:56.006948 | orchestrator | Registering Redlock._release_script
2025-06-02 12:54:56.062815 | orchestrator | 2025-06-02 12:54:56 | INFO  | Task 1397fddc-4c5b-4cb1-9baa-6233447a2bc4 (squid) was prepared for execution.
2025-06-02 12:54:56.062915 | orchestrator | 2025-06-02 12:54:56 | INFO  | It takes a moment until task 1397fddc-4c5b-4cb1-9baa-6233447a2bc4 (squid) has been started and output is visible here.
2025-06-02 12:54:59.609294 | orchestrator |
2025-06-02 12:54:59.610175 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-02 12:54:59.610837 | orchestrator |
2025-06-02 12:54:59.611557 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-02 12:54:59.612273 | orchestrator | Monday 02 June 2025 12:54:59 +0000 (0:00:00.119) 0:00:00.119 ***********
2025-06-02 12:54:59.676196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 12:54:59.676289 | orchestrator |
2025-06-02 12:54:59.676510 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-02 12:54:59.677390 | orchestrator | Monday 02 June 2025 12:54:59 +0000 (0:00:00.069) 0:00:00.189 ***********
2025-06-02 12:55:00.707605 | orchestrator | ok: [testbed-manager]
2025-06-02 12:55:00.708841 | orchestrator |
2025-06-02 12:55:00.709305 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-02 12:55:00.709900 | orchestrator | Monday 02 June 2025 12:55:00 +0000 (0:00:01.030) 0:00:01.219 ***********
2025-06-02 12:55:01.719089 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-02 12:55:01.719594 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-02 12:55:01.720763 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-02 12:55:01.722093 | orchestrator |
2025-06-02 12:55:01.722907 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-02 12:55:01.723425 | orchestrator | Monday 02 June 2025 12:55:01 +0000 (0:00:01.011) 0:00:02.230 ***********
2025-06-02 12:55:02.645513 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-02 12:55:02.646799 | orchestrator |
2025-06-02 12:55:02.647305 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-02 12:55:02.648612 | orchestrator | Monday 02 June 2025 12:55:02 +0000 (0:00:00.926) 0:00:03.157 ***********
2025-06-02 12:55:02.967895 | orchestrator | ok: [testbed-manager]
2025-06-02 12:55:02.970261 | orchestrator |
2025-06-02 12:55:02.970310 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-02 12:55:02.971203 | orchestrator | Monday 02 June 2025 12:55:02 +0000 (0:00:00.321) 0:00:03.478 ***********
2025-06-02 12:55:03.713929 | orchestrator | changed: [testbed-manager]
2025-06-02 12:55:03.714125 | orchestrator |
2025-06-02 12:55:03.714147 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-02 12:55:03.714964 | orchestrator | Monday 02 June 2025 12:55:03 +0000 (0:00:00.745) 0:00:04.224 ***********
2025-06-02 12:55:34.970582 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-06-02 12:55:34.970702 | orchestrator | ok: [testbed-manager]
2025-06-02 12:55:34.970718 | orchestrator |
2025-06-02 12:55:34.970731 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-02 12:55:34.970744 | orchestrator | Monday 02 June 2025 12:55:34 +0000 (0:00:31.250) 0:00:35.475 ***********
2025-06-02 12:55:47.393498 | orchestrator | changed: [testbed-manager]
2025-06-02 12:55:47.393618 | orchestrator |
2025-06-02 12:55:47.393636 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-02 12:55:47.394711 | orchestrator | Monday 02 June 2025 12:55:47 +0000 (0:00:12.425) 0:00:47.900 ***********
2025-06-02 12:56:47.456657 | orchestrator | Pausing for 60 seconds
2025-06-02 12:56:47.456766 | orchestrator | changed: [testbed-manager]
2025-06-02 12:56:47.456781 | orchestrator |
2025-06-02 12:56:47.456793 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-02 12:56:47.456805 | orchestrator | Monday 02 June 2025 12:56:47 +0000 (0:01:00.066) 0:01:47.967 ***********
2025-06-02 12:56:47.504807 | orchestrator | ok: [testbed-manager]
2025-06-02 12:56:47.505001 | orchestrator |
2025-06-02 12:56:47.505065 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-02 12:56:47.505520 | orchestrator | Monday 02 June 2025 12:56:47 +0000 (0:00:00.051) 0:01:48.018 ***********
2025-06-02 12:56:48.062184 | orchestrator | changed: [testbed-manager]
2025-06-02 12:56:48.062359 | orchestrator |
2025-06-02 12:56:48.063152 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:56:48.063428 | orchestrator | 2025-06-02 12:56:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:56:48.063864 | orchestrator | 2025-06-02 12:56:48 | INFO  | Please wait and do not abort execution.
2025-06-02 12:56:48.065196 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:56:48.066140 | orchestrator |
2025-06-02 12:56:48.067386 | orchestrator |
2025-06-02 12:56:48.068420 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:56:48.071315 | orchestrator | Monday 02 June 2025 12:56:48 +0000 (0:00:00.556) 0:01:48.575 ***********
2025-06-02 12:56:48.072118 | orchestrator | ===============================================================================
2025-06-02 12:56:48.072916 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-06-02 12:56:48.073213 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.25s
2025-06-02 12:56:48.073682 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.43s
2025-06-02 12:56:48.074154 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.03s
2025-06-02 12:56:48.074576 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s
2025-06-02 12:56:48.074975 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s
2025-06-02 12:56:48.075387 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.75s
2025-06-02 12:56:48.075706 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.56s
2025-06-02 12:56:48.076062 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s
2025-06-02 12:56:48.076492 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2025-06-02 12:56:48.076735 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s
2025-06-02 12:56:48.510716 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-02 12:56:48.510808 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-06-02 12:56:48.513652 | orchestrator | ++ semver 9.1.0 9.0.0
2025-06-02 12:56:48.558294 | orchestrator | + [[ 1 -lt 0 ]]
2025-06-02 12:56:48.558419 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-02 12:56:50.171211 | orchestrator | Registering Redlock._acquired_script
2025-06-02 12:56:50.171309 | orchestrator | Registering Redlock._extend_script
2025-06-02 12:56:50.171323 | orchestrator | Registering Redlock._release_script
2025-06-02 12:56:50.225939 | orchestrator | 2025-06-02 12:56:50 | INFO  | Task 8e001c0b-a710-46b5-8e3d-78fe066e1628 (operator) was prepared for execution.
2025-06-02 12:56:50.226015 | orchestrator | 2025-06-02 12:56:50 | INFO  | It takes a moment until task 8e001c0b-a710-46b5-8e3d-78fe066e1628 (operator) has been started and output is visible here.
2025-06-02 12:56:53.994346 | orchestrator |
2025-06-02 12:56:53.994561 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-02 12:56:53.996271 | orchestrator |
2025-06-02 12:56:53.996296 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 12:56:53.997337 | orchestrator | Monday 02 June 2025 12:56:53 +0000 (0:00:00.125) 0:00:00.125 ***********
2025-06-02 12:56:57.160478 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:57.160585 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:57.161117 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:57.161491 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:57.162977 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:57.163419 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:57.163933 | orchestrator |
2025-06-02 12:56:57.164758 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-02 12:56:57.164986 | orchestrator | Monday 02 June 2025 12:56:57 +0000 (0:00:03.166) 0:00:03.292 ***********
2025-06-02 12:56:57.883310 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:57.883433 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:57.883449 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:57.883461 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:57.883473 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:57.883542 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:57.886948 | orchestrator |
2025-06-02 12:56:57.887251 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-02 12:56:57.887519 | orchestrator |
2025-06-02 12:56:57.887728 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-02 12:56:57.888063 | orchestrator | Monday 02 June 2025 12:56:57 +0000 (0:00:00.718) 0:00:04.011 ***********
2025-06-02 12:56:57.933069 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:57.948797 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:57.967982 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:58.007079 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:58.007208 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:58.007222 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:58.007234 | orchestrator |
2025-06-02 12:56:58.007246 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-02 12:56:58.007258 | orchestrator | Monday 02 June 2025 12:56:57 +0000 (0:00:00.125) 0:00:04.136 ***********
2025-06-02 12:56:58.061235 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:56:58.080669 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:56:58.100876 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:56:58.142250 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:56:58.142309 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:56:58.151012 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:56:58.151055 | orchestrator |
2025-06-02 12:56:58.151068 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-02 12:56:58.151080 | orchestrator | Monday 02 June 2025 12:56:58 +0000 (0:00:00.139) 0:00:04.276 ***********
2025-06-02 12:56:58.697061 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:58.697603 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:58.698113 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:58.699208 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:58.699543 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:58.700431 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:58.700857 | orchestrator |
2025-06-02 12:56:58.701764 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-02 12:56:58.702117 | orchestrator | Monday 02 June 2025 12:56:58 +0000 (0:00:00.553) 0:00:04.829 ***********
2025-06-02 12:56:59.427725 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:56:59.427860 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:56:59.428711 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:56:59.430254 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:56:59.431858 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:56:59.432591 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:56:59.433785 | orchestrator |
2025-06-02 12:56:59.434290 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 12:56:59.435039 | orchestrator | Monday 02 June 2025 12:56:59 +0000 (0:00:00.729) 0:00:05.559 ***********
2025-06-02 12:57:00.634956 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 12:57:00.637421 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 12:57:00.637557 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 12:57:00.639710 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 12:57:00.640120 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 12:57:00.642456 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 12:57:00.643192 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 12:57:00.643611 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 12:57:00.644184 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 12:57:00.644206 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 12:57:00.644763 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 12:57:00.645027 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 12:57:00.647657 | orchestrator |
2025-06-02 12:57:00.647752 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 12:57:00.649816 | orchestrator | Monday 02 June 2025 12:57:00 +0000 (0:00:01.206) 0:00:06.765 ***********
2025-06-02 12:57:01.829507 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:01.832479 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:01.832926 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:01.833006 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:01.833437 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:01.833789 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:01.834214 | orchestrator |
2025-06-02 12:57:01.834671 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 12:57:01.835030 | orchestrator | Monday 02 June 2025 12:57:01 +0000 (0:00:01.194) 0:00:07.960 ***********
2025-06-02 12:57:02.996395 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-02 12:57:02.996491 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-02 12:57:02.997952 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-02 12:57:03.106765 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.111875 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.111924 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.111938 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.111950 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.112014 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 12:57:03.112959 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.113356 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.114292 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.114877 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.117099 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.117134 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-02 12:57:03.117209 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.117828 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.118552 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.119046 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.122070 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.122484 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-02 12:57:03.122899 | orchestrator |
2025-06-02 12:57:03.123317 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 12:57:03.125339 | orchestrator | Monday 02 June 2025 12:57:03 +0000 (0:00:01.278) 0:00:09.238 ***********
2025-06-02 12:57:03.723203 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:03.723810 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:03.726948 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:03.727012 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:03.727024 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:03.727036 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:03.727121 | orchestrator |
2025-06-02 12:57:03.727856 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 12:57:03.728162 | orchestrator | Monday 02 June 2025 12:57:03 +0000 (0:00:00.617) 0:00:09.855 ***********
2025-06-02 12:57:03.783141 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:03.803460 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:03.826631 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:03.868065 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:03.868300 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:03.871951 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:03.871979 | orchestrator |
2025-06-02 12:57:03.871992 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 12:57:03.872004 | orchestrator | Monday 02 June 2025 12:57:03 +0000 (0:00:00.144) 0:00:10.000 ***********
2025-06-02 12:57:04.606158 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 12:57:04.606253 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:04.606609 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 12:57:04.608466 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:04.609423 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 12:57:04.610269 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:04.611751 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 12:57:04.612644 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:04.613817 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 12:57:04.615119 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:04.616201 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 12:57:04.616700 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:04.617198 | orchestrator |
2025-06-02 12:57:04.618284 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 12:57:04.619290 | orchestrator | Monday 02 June 2025 12:57:04 +0000 (0:00:00.732) 0:00:10.733 ***********
2025-06-02 12:57:04.664811 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:04.686734 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:04.706971 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:04.735295 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:04.736561 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:04.737673 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:04.738980 | orchestrator |
2025-06-02 12:57:04.739796 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 12:57:04.740581 | orchestrator | Monday 02 June 2025 12:57:04 +0000 (0:00:00.134) 0:00:10.868 ***********
2025-06-02 12:57:04.780124 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:04.802950 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:04.826412 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:04.851273 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:04.892889 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:04.895098 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:04.895136 | orchestrator |
2025-06-02 12:57:04.895450 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 12:57:04.896792 | orchestrator | Monday 02 June 2025 12:57:04 +0000 (0:00:00.155) 0:00:11.023 ***********
2025-06-02 12:57:04.943010 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:04.963830 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:04.986341 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:05.005604 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:05.033812 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:05.034194 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:05.035100 | orchestrator |
2025-06-02 12:57:05.035658 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 12:57:05.036009 | orchestrator | Monday 02 June 2025 12:57:05 +0000 (0:00:00.142) 0:00:11.166 ***********
2025-06-02 12:57:05.684437 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:05.684595 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:05.685065 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:05.688648 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:05.688986 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:05.689748 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:05.690256 | orchestrator |
2025-06-02 12:57:05.690577 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 12:57:05.691137 | orchestrator | Monday 02 June 2025 12:57:05 +0000 (0:00:00.645) 0:00:11.812 ***********
2025-06-02 12:57:05.777944 | orchestrator | skipping: [testbed-node-0]
2025-06-02 12:57:05.799559 | orchestrator | skipping: [testbed-node-1]
2025-06-02 12:57:05.910796 | orchestrator | skipping: [testbed-node-2]
2025-06-02 12:57:05.911019 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:05.911899 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:05.912647 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:05.913233 | orchestrator |
2025-06-02 12:57:05.914013 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:57:05.914980 | orchestrator | 2025-06-02 12:57:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:57:05.915034 | orchestrator | 2025-06-02 12:57:05 | INFO  | Please wait and do not abort execution.
2025-06-02 12:57:05.915535 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.916515 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.916836 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.917688 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.918393 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.918844 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 12:57:05.919835 | orchestrator |
2025-06-02 12:57:05.919930 | orchestrator |
2025-06-02 12:57:05.922171 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:57:05.922207 | orchestrator | Monday 02 June 2025 12:57:05 +0000 (0:00:00.231) 0:00:12.043 ***********
2025-06-02 12:57:05.923317 | orchestrator | ===============================================================================
2025-06-02 12:57:05.923650 | orchestrator | Gathering Facts --------------------------------------------------------- 3.17s
2025-06-02 12:57:05.924592 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2025-06-02 12:57:05.925815 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-06-02 12:57:05.926467 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s
2025-06-02 12:57:05.928200 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-06-02 12:57:05.929282 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.73s
2025-06-02 12:57:05.929575 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s
2025-06-02 12:57:05.930483 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-06-02 12:57:05.931553 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.62s
2025-06-02 12:57:05.932221 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.55s
2025-06-02 12:57:05.932781 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-06-02 12:57:05.933191 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-06-02 12:57:05.933697 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.14s
2025-06-02 12:57:05.934103 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-02 12:57:05.934494 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-06-02 12:57:05.934944 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.13s
2025-06-02 12:57:05.935444 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s
2025-06-02 12:57:06.390268 | orchestrator | + osism apply --environment custom facts
2025-06-02 12:57:07.975762 | orchestrator | 2025-06-02 12:57:07 | INFO  | Trying to run play facts in environment custom
2025-06-02 12:57:07.980173 | orchestrator | Registering Redlock._acquired_script
2025-06-02 12:57:07.980236 | orchestrator | Registering Redlock._extend_script
2025-06-02 12:57:07.980534 | orchestrator | Registering Redlock._release_script
2025-06-02 12:57:08.041428 | orchestrator | 2025-06-02 12:57:08 | INFO  | Task 2f93bc92-774e-4066-8ff3-740e0b2d2864 (facts) was prepared for execution.
2025-06-02 12:57:08.041506 | orchestrator | 2025-06-02 12:57:08 | INFO  | It takes a moment until task 2f93bc92-774e-4066-8ff3-740e0b2d2864 (facts) has been started and output is visible here.
2025-06-02 12:57:11.840593 | orchestrator |
2025-06-02 12:57:11.840798 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 12:57:11.840864 | orchestrator |
2025-06-02 12:57:11.841268 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 12:57:11.841659 | orchestrator | Monday 02 June 2025 12:57:11 +0000 (0:00:00.078) 0:00:00.078 ***********
2025-06-02 12:57:13.295046 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:13.295240 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:13.295724 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:13.296518 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:13.300318 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:13.300362 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:13.300988 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:13.301412 | orchestrator |
2025-06-02 12:57:13.301538 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 12:57:13.301932 | orchestrator | Monday 02 June 2025 12:57:13 +0000 (0:00:01.455) 0:00:01.534 ***********
2025-06-02 12:57:14.404968 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:14.405149 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:14.406121 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:14.406993 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:14.407857 | orchestrator | changed: [testbed-node-0]
2025-06-02 12:57:14.408562 | orchestrator | changed: [testbed-node-1]
2025-06-02 12:57:14.409438 | orchestrator | changed: [testbed-node-2]
2025-06-02 12:57:14.409803 | orchestrator |
2025-06-02 12:57:14.410496 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 12:57:14.411103 | orchestrator |
2025-06-02 12:57:14.415151 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 12:57:14.415194 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:01.105) 0:00:02.639 ***********
2025-06-02 12:57:14.498097 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:14.498829 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:14.499203 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:14.499654 | orchestrator |
2025-06-02 12:57:14.500156 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 12:57:14.500572 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:00.098) 0:00:02.738 ***********
2025-06-02 12:57:14.666690 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:14.667545 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:14.667582 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:14.668025 | orchestrator |
2025-06-02 12:57:14.668048 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 12:57:14.668516 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:00.166) 0:00:02.905 ***********
2025-06-02 12:57:14.841312 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:14.841486 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:14.841917 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:14.842328 | orchestrator |
2025-06-02 12:57:14.845121 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 12:57:14.846735 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:00.177) 0:00:03.082 ***********
2025-06-02 12:57:14.945497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 12:57:14.947267 | orchestrator |
2025-06-02 12:57:14.948124 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 12:57:14.948837 | orchestrator | Monday 02 June 2025 12:57:14 +0000 (0:00:00.103) 0:00:03.185 ***********
2025-06-02 12:57:15.362511 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:15.363450 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:15.364767 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:15.365496 | orchestrator |
2025-06-02 12:57:15.366169 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 12:57:15.366822 | orchestrator | Monday 02 June 2025 12:57:15 +0000 (0:00:00.416) 0:00:03.602 ***********
2025-06-02 12:57:15.454922 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:15.455290 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:15.456160 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:15.456898 | orchestrator |
2025-06-02 12:57:15.457519 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 12:57:15.457954 | orchestrator | Monday 02 June 2025 12:57:15 +0000 (0:00:00.092) 0:00:03.694 ***********
2025-06-02 12:57:16.422365 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:16.423149 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:16.423422 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:16.423737 | orchestrator |
2025-06-02 12:57:16.424124 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 12:57:16.424441 | orchestrator | Monday 02 June 2025 12:57:16 +0000 (0:00:00.966) 0:00:04.661 ***********
2025-06-02 12:57:16.863703 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:16.863896 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:16.863994 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:16.866271 | orchestrator |
2025-06-02 12:57:16.866415 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 12:57:16.866902 | orchestrator | Monday 02 June 2025 12:57:16 +0000 (0:00:00.440) 0:00:05.101 ***********
2025-06-02 12:57:17.861718 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:17.861823 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:17.863027 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:17.863097 | orchestrator |
2025-06-02 12:57:17.863721 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 12:57:17.864211 | orchestrator | Monday 02 June 2025 12:57:17 +0000 (0:00:00.997) 0:00:06.098 ***********
2025-06-02 12:57:31.025982 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:31.026251 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:31.026272 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:31.026284 | orchestrator |
2025-06-02 12:57:31.026296 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 12:57:31.026309 | orchestrator | Monday 02 June 2025 12:57:31 +0000 (0:00:13.160) 0:00:19.259 ***********
2025-06-02 12:57:31.113462 | orchestrator | skipping: [testbed-node-3]
2025-06-02 12:57:31.113675 | orchestrator | skipping: [testbed-node-4]
2025-06-02 12:57:31.114349 | orchestrator | skipping: [testbed-node-5]
2025-06-02 12:57:31.114805 | orchestrator |
2025-06-02 12:57:31.115869 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 12:57:31.117269 | orchestrator | Monday 02 June 2025 12:57:31 +0000 (0:00:00.094) 0:00:19.353 ***********
2025-06-02 12:57:38.046279 | orchestrator | changed: [testbed-node-4]
2025-06-02 12:57:38.047272 | orchestrator | changed: [testbed-node-3]
2025-06-02 12:57:38.048157 | orchestrator | changed: [testbed-node-5]
2025-06-02 12:57:38.049304 | orchestrator |
2025-06-02 12:57:38.050166 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 12:57:38.051091 | orchestrator | Monday 02 June 2025 12:57:38 +0000 (0:00:06.927) 0:00:26.280 ***********
2025-06-02 12:57:38.465453 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:38.465816 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:38.466793 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:38.466828 | orchestrator |
2025-06-02 12:57:38.467435 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 12:57:38.468154 | orchestrator | Monday 02 June 2025 12:57:38 +0000 (0:00:00.422) 0:00:26.702 ***********
2025-06-02 12:57:41.893466 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 12:57:41.893637 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 12:57:41.893728 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 12:57:41.895951 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 12:57:41.896417 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 12:57:41.897949 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 12:57:41.898716 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 12:57:41.899519 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 12:57:41.900142 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 12:57:41.901445 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:57:41.902715 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:57:41.903318 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 12:57:41.904061 | orchestrator |
2025-06-02 12:57:41.904734 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 12:57:41.905330 | orchestrator | Monday 02 June 2025 12:57:41 +0000 (0:00:03.428) 0:00:30.131 ***********
2025-06-02 12:57:43.096486 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:43.096824 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:43.100998 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:43.101239 | orchestrator |
2025-06-02 12:57:43.102472 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 12:57:43.103044 | orchestrator |
2025-06-02 12:57:43.104070 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 12:57:43.104653 | orchestrator | Monday 02 June 2025 12:57:43 +0000 (0:00:01.202) 0:00:31.334 ***********
2025-06-02 12:57:46.862262 | orchestrator | ok: [testbed-node-0]
2025-06-02 12:57:46.862539 | orchestrator | ok: [testbed-node-2]
2025-06-02 12:57:46.862689 | orchestrator | ok: [testbed-node-1]
2025-06-02 12:57:46.863416 | orchestrator | ok: [testbed-manager]
2025-06-02 12:57:46.863875 | orchestrator | ok: [testbed-node-3]
2025-06-02 12:57:46.864533 | orchestrator | ok: [testbed-node-5]
2025-06-02 12:57:46.865105 | orchestrator | ok: [testbed-node-4]
2025-06-02 12:57:46.865876 | orchestrator |
2025-06-02 12:57:46.866302 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 12:57:46.866866 | orchestrator | 2025-06-02 12:57:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 12:57:46.866891 | orchestrator | 2025-06-02 12:57:46 | INFO  | Please wait and do not abort execution.
2025-06-02 12:57:46.867500 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:57:46.867599 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:57:46.868176 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:57:46.868716 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 12:57:46.868739 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 12:57:46.868941 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 12:57:46.869356 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 12:57:46.869650 | orchestrator |
2025-06-02 12:57:46.869931 | orchestrator |
2025-06-02 12:57:46.870315 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 12:57:46.870544 | orchestrator | Monday 02 June 2025 12:57:46 +0000 (0:00:03.766) 0:00:35.100 ***********
2025-06-02 12:57:46.870815 | orchestrator | ===============================================================================
2025-06-02 12:57:46.871144 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.16s
2025-06-02 12:57:46.871408 | orchestrator | Install required packages (Debian) -------------------------------------- 6.93s
2025-06-02 12:57:46.871704 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s
2025-06-02 12:57:46.871951 | orchestrator | Copy fact files --------------------------------------------------------- 3.43s
2025-06-02 12:57:46.872152 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2025-06-02 12:57:46.872376 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s
2025-06-02 12:57:46.872655 | orchestrator | Copy fact file ---------------------------------------------------------- 1.11s
2025-06-02 12:57:46.872913 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.00s
2025-06-02 12:57:46.873129 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2025-06-02 12:57:46.873365 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2025-06-02 12:57:46.873638 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-06-02 12:57:46.873852 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2025-06-02 12:57:46.874428 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2025-06-02 12:57:46.874620 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s
2025-06-02 12:57:46.874715 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.10s
2025-06-02 12:57:46.874923 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-06-02 12:57:46.875618 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2025-06-02 12:57:46.875641 |
orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s 2025-06-02 12:57:47.298969 | orchestrator | + osism apply bootstrap 2025-06-02 12:57:48.910879 | orchestrator | Registering Redlock._acquired_script 2025-06-02 12:57:48.910977 | orchestrator | Registering Redlock._extend_script 2025-06-02 12:57:48.910992 | orchestrator | Registering Redlock._release_script 2025-06-02 12:57:48.984720 | orchestrator | 2025-06-02 12:57:48 | INFO  | Task 83d4000f-73ca-4188-8892-37272ae84075 (bootstrap) was prepared for execution. 2025-06-02 12:57:48.984809 | orchestrator | 2025-06-02 12:57:48 | INFO  | It takes a moment until task 83d4000f-73ca-4188-8892-37272ae84075 (bootstrap) has been started and output is visible here. 2025-06-02 12:57:52.981679 | orchestrator | 2025-06-02 12:57:52.985065 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-02 12:57:52.985370 | orchestrator | 2025-06-02 12:57:52.985396 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-02 12:57:52.985409 | orchestrator | Monday 02 June 2025 12:57:52 +0000 (0:00:00.160) 0:00:00.160 *********** 2025-06-02 12:57:53.054892 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:53.081350 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:53.103965 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:53.142232 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:53.226887 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:53.227810 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:53.228240 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:53.228934 | orchestrator | 2025-06-02 12:57:53.229439 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 12:57:53.230112 | orchestrator | 2025-06-02 12:57:53.230501 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 12:57:53.231275 | orchestrator | Monday 02 June 2025 12:57:53 +0000 (0:00:00.248) 0:00:00.409 *********** 2025-06-02 12:57:56.799463 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:56.799659 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:56.800576 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:56.801471 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:56.802335 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:56.803036 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:56.803736 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:56.804341 | orchestrator | 2025-06-02 12:57:56.804772 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-02 12:57:56.805252 | orchestrator | 2025-06-02 12:57:56.805966 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 12:57:56.806493 | orchestrator | Monday 02 June 2025 12:57:56 +0000 (0:00:03.571) 0:00:03.980 *********** 2025-06-02 12:57:56.865454 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 12:57:56.911476 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 12:57:56.913945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-02 12:57:56.913998 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 12:57:56.914110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 12:57:56.914150 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 12:57:56.914163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-02 12:57:56.914174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 12:57:56.943144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-02 12:57:56.943228 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 12:57:56.943241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 12:57:56.994281 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 12:57:56.994631 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-02 12:57:56.994819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-02 12:57:56.995101 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 12:57:56.995451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 12:57:56.995901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-02 12:57:56.996189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-02 12:57:56.996388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 12:57:57.251082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-02 12:57:57.251309 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:57:57.252250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-02 12:57:57.256659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 12:57:57.257783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-02 12:57:57.259227 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 12:57:57.259670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 12:57:57.261031 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 12:57:57.262430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 12:57:57.263193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 12:57:57.264066 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:57:57.264670 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 12:57:57.265515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 12:57:57.266177 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-02 12:57:57.267254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 12:57:57.267676 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 12:57:57.268243 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-02 12:57:57.269257 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 12:57:57.269349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-02 12:57:57.270171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 12:57:57.270637 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:57:57.271225 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-02 12:57:57.271733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 12:57:57.272225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-02 12:57:57.272636 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-02 12:57:57.273139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 12:57:57.273611 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:57:57.274106 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-02 12:57:57.274580 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-02 12:57:57.274914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-02 12:57:57.275410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-02 12:57:57.275837 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-02 12:57:57.276372 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2025-06-02 12:57:57.276765 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:57:57.277302 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-02 12:57:57.277519 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-02 12:57:57.279089 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:57:57.279566 | orchestrator | 2025-06-02 12:57:57.279974 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-02 12:57:57.280468 | orchestrator | 2025-06-02 12:57:57.280938 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-02 12:57:57.282826 | orchestrator | Monday 02 June 2025 12:57:57 +0000 (0:00:00.451) 0:00:04.432 *********** 2025-06-02 12:57:58.476550 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:58.476719 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:58.477902 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:58.478448 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:57:58.479535 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:58.480569 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:58.481460 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:58.482158 | orchestrator | 2025-06-02 12:57:58.482853 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-02 12:57:58.483468 | orchestrator | Monday 02 June 2025 12:57:58 +0000 (0:00:01.225) 0:00:05.658 *********** 2025-06-02 12:57:59.762912 | orchestrator | ok: [testbed-manager] 2025-06-02 12:57:59.766255 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:57:59.766293 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:57:59.766312 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:57:59.766804 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:57:59.767437 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:57:59.767929 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 12:57:59.769101 | orchestrator | 2025-06-02 12:57:59.770169 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-02 12:57:59.770568 | orchestrator | Monday 02 June 2025 12:57:59 +0000 (0:00:01.284) 0:00:06.942 *********** 2025-06-02 12:58:00.010139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:00.010607 | orchestrator | 2025-06-02 12:58:00.011304 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-02 12:58:00.014767 | orchestrator | Monday 02 June 2025 12:58:00 +0000 (0:00:00.248) 0:00:07.191 *********** 2025-06-02 12:58:01.956168 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:01.956274 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:01.956290 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:01.956396 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:01.956454 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:01.956468 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:01.956938 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:01.957713 | orchestrator | 2025-06-02 12:58:01.959304 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-02 12:58:01.959399 | orchestrator | Monday 02 June 2025 12:58:01 +0000 (0:00:01.944) 0:00:09.136 *********** 2025-06-02 12:58:02.041061 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:02.214634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:02.214735 | 
orchestrator | 2025-06-02 12:58:02.215292 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-02 12:58:02.215745 | orchestrator | Monday 02 June 2025 12:58:02 +0000 (0:00:00.259) 0:00:09.395 *********** 2025-06-02 12:58:03.215227 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:03.216392 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:03.217490 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:03.218414 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:03.219432 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:03.219930 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:03.221123 | orchestrator | 2025-06-02 12:58:03.221742 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-02 12:58:03.222237 | orchestrator | Monday 02 June 2025 12:58:03 +0000 (0:00:00.999) 0:00:10.395 *********** 2025-06-02 12:58:03.286450 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:03.765738 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:03.765916 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:03.766080 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:03.766445 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:03.767182 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:03.767946 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:03.769107 | orchestrator | 2025-06-02 12:58:03.769774 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-02 12:58:03.770571 | orchestrator | Monday 02 June 2025 12:58:03 +0000 (0:00:00.551) 0:00:10.947 *********** 2025-06-02 12:58:03.872388 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:03.902228 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:03.920694 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:04.189349 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 12:58:04.189448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:04.189526 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:04.190394 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:04.190464 | orchestrator | 2025-06-02 12:58:04.194312 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 12:58:04.194800 | orchestrator | Monday 02 June 2025 12:58:04 +0000 (0:00:00.422) 0:00:11.370 *********** 2025-06-02 12:58:04.257072 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:04.294341 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:04.318222 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:04.345585 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:04.391613 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:58:04.391919 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:04.392831 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:04.393933 | orchestrator | 2025-06-02 12:58:04.394808 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 12:58:04.395843 | orchestrator | Monday 02 June 2025 12:58:04 +0000 (0:00:00.203) 0:00:11.573 *********** 2025-06-02 12:58:04.652826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:04.653132 | orchestrator | 2025-06-02 12:58:04.654386 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 12:58:04.657862 | orchestrator | Monday 02 June 2025 12:58:04 +0000 (0:00:00.260) 0:00:11.834 *********** 2025-06-02 12:58:04.934734 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:04.934822 | orchestrator | 2025-06-02 12:58:04.935694 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 12:58:04.936378 | orchestrator | Monday 02 June 2025 12:58:04 +0000 (0:00:00.277) 0:00:12.111 *********** 2025-06-02 12:58:06.270872 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:06.270985 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:06.271732 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:06.273349 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:06.274339 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:06.275396 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:06.276439 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:06.277417 | orchestrator | 2025-06-02 12:58:06.278791 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 12:58:06.279918 | orchestrator | Monday 02 June 2025 12:58:06 +0000 (0:00:01.337) 0:00:13.449 *********** 2025-06-02 12:58:06.337785 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:06.357985 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:06.380203 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:06.401296 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:06.455661 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:58:06.456392 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:06.457606 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:06.458614 | orchestrator | 2025-06-02 12:58:06.461145 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 12:58:06.461251 | orchestrator | Monday 02 June 2025 12:58:06 
+0000 (0:00:00.188) 0:00:13.637 *********** 2025-06-02 12:58:06.973077 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:06.974401 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:06.974825 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:06.975350 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:06.975811 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:06.976509 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:06.977027 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:06.977738 | orchestrator | 2025-06-02 12:58:06.978334 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 12:58:06.978642 | orchestrator | Monday 02 June 2025 12:58:06 +0000 (0:00:00.514) 0:00:14.152 *********** 2025-06-02 12:58:07.050407 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:07.076301 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:07.101976 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:07.124185 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:07.197193 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:58:07.198114 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:07.199734 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:07.200824 | orchestrator | 2025-06-02 12:58:07.201859 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 12:58:07.202449 | orchestrator | Monday 02 June 2025 12:58:07 +0000 (0:00:00.226) 0:00:14.378 *********** 2025-06-02 12:58:07.806150 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:07.807543 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:07.808486 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:07.809352 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:07.810555 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:07.811787 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 12:58:07.813339 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:07.813920 | orchestrator | 2025-06-02 12:58:07.814466 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 12:58:07.815388 | orchestrator | Monday 02 June 2025 12:58:07 +0000 (0:00:00.607) 0:00:14.986 *********** 2025-06-02 12:58:08.862672 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:08.863728 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:08.865285 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:08.865617 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:08.866844 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:08.867338 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:08.868404 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:08.868606 | orchestrator | 2025-06-02 12:58:08.869315 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 12:58:08.869720 | orchestrator | Monday 02 June 2025 12:58:08 +0000 (0:00:01.056) 0:00:16.042 *********** 2025-06-02 12:58:10.014860 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:10.015254 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:10.016372 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:10.017377 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:10.018159 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:10.019093 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:10.019514 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:10.020322 | orchestrator | 2025-06-02 12:58:10.021163 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 12:58:10.021690 | orchestrator | Monday 02 June 2025 12:58:10 +0000 (0:00:01.152) 0:00:17.194 *********** 2025-06-02 12:58:10.371582 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:10.372466 | orchestrator | 2025-06-02 12:58:10.373297 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 12:58:10.374270 | orchestrator | Monday 02 June 2025 12:58:10 +0000 (0:00:00.358) 0:00:17.552 *********** 2025-06-02 12:58:10.446480 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:11.875360 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:11.875910 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:11.877628 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:11.880177 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:11.880225 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:11.880238 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:11.881170 | orchestrator | 2025-06-02 12:58:11.881939 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 12:58:11.882587 | orchestrator | Monday 02 June 2025 12:58:11 +0000 (0:00:01.501) 0:00:19.053 *********** 2025-06-02 12:58:11.976402 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:12.002314 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:12.025809 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:12.051128 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:12.118565 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:12.119290 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:12.119646 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:12.120468 | orchestrator | 2025-06-02 12:58:12.120696 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 12:58:12.121191 | orchestrator | Monday 02 June 2025 12:58:12 
+0000 (0:00:00.246) 0:00:19.300 *********** 2025-06-02 12:58:12.206463 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:12.233036 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:12.254657 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:12.282113 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:12.349260 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:12.350059 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:12.351169 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:12.351731 | orchestrator | 2025-06-02 12:58:12.352302 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 12:58:12.352950 | orchestrator | Monday 02 June 2025 12:58:12 +0000 (0:00:00.230) 0:00:19.530 *********** 2025-06-02 12:58:12.428418 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:12.456494 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:12.482499 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:12.506229 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:12.579678 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:12.579866 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:12.580479 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:12.581186 | orchestrator | 2025-06-02 12:58:12.582378 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 12:58:12.583418 | orchestrator | Monday 02 June 2025 12:58:12 +0000 (0:00:00.230) 0:00:19.761 *********** 2025-06-02 12:58:12.860244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:12.861062 | orchestrator | 2025-06-02 12:58:12.861704 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 12:58:12.862538 | 
orchestrator | Monday 02 June 2025 12:58:12 +0000 (0:00:00.280) 0:00:20.041 *********** 2025-06-02 12:58:13.482480 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:13.484133 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:13.484180 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:13.485595 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:13.486944 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:13.487655 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:13.489131 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:13.489728 | orchestrator | 2025-06-02 12:58:13.490567 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 12:58:13.491236 | orchestrator | Monday 02 June 2025 12:58:13 +0000 (0:00:00.620) 0:00:20.661 *********** 2025-06-02 12:58:13.552840 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:13.587955 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:13.617161 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:13.637778 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:13.700276 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:58:13.700799 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:13.701886 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:13.702491 | orchestrator | 2025-06-02 12:58:13.703911 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 12:58:13.704649 | orchestrator | Monday 02 June 2025 12:58:13 +0000 (0:00:00.219) 0:00:20.881 *********** 2025-06-02 12:58:14.770285 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:14.771654 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:14.771946 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:14.773023 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:14.773785 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:14.774725 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 12:58:14.775177 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:14.775825 | orchestrator | 2025-06-02 12:58:14.776512 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 12:58:14.781826 | orchestrator | Monday 02 June 2025 12:58:14 +0000 (0:00:01.067) 0:00:21.949 *********** 2025-06-02 12:58:15.325727 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:15.326669 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:15.329843 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:15.330397 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:15.331326 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:15.332020 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:15.332924 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:15.333908 | orchestrator | 2025-06-02 12:58:15.334844 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 12:58:15.335604 | orchestrator | Monday 02 June 2025 12:58:15 +0000 (0:00:00.558) 0:00:22.507 *********** 2025-06-02 12:58:16.497692 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:16.497938 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:16.500625 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:16.501884 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:16.503155 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:16.504354 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:16.505033 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:16.505661 | orchestrator | 2025-06-02 12:58:16.507184 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 12:58:16.507632 | orchestrator | Monday 02 June 2025 12:58:16 +0000 (0:00:01.170) 0:00:23.677 *********** 2025-06-02 12:58:30.833261 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:30.833397 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 12:58:30.833413 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:30.833425 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:30.833646 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:30.833712 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:30.835625 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:30.837173 | orchestrator | 2025-06-02 12:58:30.837557 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-02 12:58:30.838551 | orchestrator | Monday 02 June 2025 12:58:30 +0000 (0:00:14.331) 0:00:38.009 *********** 2025-06-02 12:58:30.907137 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:30.927150 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:30.954167 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:30.975633 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:31.027728 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:31.028269 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:31.031742 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:31.031773 | orchestrator | 2025-06-02 12:58:31.031788 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-02 12:58:31.031802 | orchestrator | Monday 02 June 2025 12:58:31 +0000 (0:00:00.200) 0:00:38.209 *********** 2025-06-02 12:58:31.100299 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:31.125387 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:31.150126 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:31.177681 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:31.235089 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:31.235947 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:31.237168 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:31.237803 | orchestrator | 2025-06-02 12:58:31.238490 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-06-02 12:58:31.239222 | orchestrator | Monday 02 June 2025 12:58:31 +0000 (0:00:00.207) 0:00:38.416 *********** 2025-06-02 12:58:31.311261 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:31.335326 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:31.359548 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:31.382737 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:31.447129 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:31.448987 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:31.449123 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:31.449995 | orchestrator | 2025-06-02 12:58:31.450147 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-02 12:58:31.450242 | orchestrator | Monday 02 June 2025 12:58:31 +0000 (0:00:00.211) 0:00:38.628 *********** 2025-06-02 12:58:31.716527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:31.716707 | orchestrator | 2025-06-02 12:58:31.717151 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-02 12:58:31.720946 | orchestrator | Monday 02 June 2025 12:58:31 +0000 (0:00:00.268) 0:00:38.896 *********** 2025-06-02 12:58:33.269239 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:33.269349 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:33.269682 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:33.271425 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:33.272709 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:33.273549 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:33.274380 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:33.275195 | orchestrator | 2025-06-02 12:58:33.276017 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-02 12:58:33.276792 | orchestrator | Monday 02 June 2025 12:58:33 +0000 (0:00:01.549) 0:00:40.446 *********** 2025-06-02 12:58:34.393945 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:34.396942 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:34.397820 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:34.398290 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:34.399215 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:34.401140 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:34.401543 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:34.404306 | orchestrator | 2025-06-02 12:58:34.405060 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-02 12:58:34.405734 | orchestrator | Monday 02 June 2025 12:58:34 +0000 (0:00:01.126) 0:00:41.573 *********** 2025-06-02 12:58:35.178814 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:35.180773 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:35.180892 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:35.181827 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:35.183013 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:35.183628 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:35.184754 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:35.185211 | orchestrator | 2025-06-02 12:58:35.185975 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-02 12:58:35.186483 | orchestrator | Monday 02 June 2025 12:58:35 +0000 (0:00:00.786) 0:00:42.359 *********** 2025-06-02 12:58:35.452526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 
12:58:35.453144 | orchestrator | 2025-06-02 12:58:35.454347 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-02 12:58:35.455459 | orchestrator | Monday 02 June 2025 12:58:35 +0000 (0:00:00.272) 0:00:42.632 *********** 2025-06-02 12:58:36.480458 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:36.482109 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:36.482193 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:36.483453 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:36.485496 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:36.486450 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:36.488304 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:36.488857 | orchestrator | 2025-06-02 12:58:36.491288 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-02 12:58:36.492159 | orchestrator | Monday 02 June 2025 12:58:36 +0000 (0:00:01.027) 0:00:43.659 *********** 2025-06-02 12:58:36.576895 | orchestrator | skipping: [testbed-manager] 2025-06-02 12:58:36.608646 | orchestrator | skipping: [testbed-node-3] 2025-06-02 12:58:36.639482 | orchestrator | skipping: [testbed-node-4] 2025-06-02 12:58:36.768096 | orchestrator | skipping: [testbed-node-5] 2025-06-02 12:58:36.768579 | orchestrator | skipping: [testbed-node-0] 2025-06-02 12:58:36.771038 | orchestrator | skipping: [testbed-node-1] 2025-06-02 12:58:36.771503 | orchestrator | skipping: [testbed-node-2] 2025-06-02 12:58:36.772031 | orchestrator | 2025-06-02 12:58:36.772376 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-02 12:58:36.772578 | orchestrator | Monday 02 June 2025 12:58:36 +0000 (0:00:00.290) 0:00:43.950 *********** 2025-06-02 12:58:47.870265 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:47.870380 | orchestrator | changed: [testbed-node-3] 2025-06-02 
12:58:47.870396 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:47.870407 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:47.871599 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:47.871645 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:47.873883 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:47.874004 | orchestrator | 2025-06-02 12:58:47.874827 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-02 12:58:47.875408 | orchestrator | Monday 02 June 2025 12:58:47 +0000 (0:00:11.097) 0:00:55.048 *********** 2025-06-02 12:58:48.662735 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:48.665117 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:48.665150 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:48.665238 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:48.666551 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:48.667688 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:48.668484 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:48.669531 | orchestrator | 2025-06-02 12:58:48.670559 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-02 12:58:48.671586 | orchestrator | Monday 02 June 2025 12:58:48 +0000 (0:00:00.794) 0:00:55.842 *********** 2025-06-02 12:58:49.562390 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:49.562529 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:49.565002 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:49.565048 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:49.565269 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:49.566566 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:49.567246 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:49.568143 | orchestrator | 2025-06-02 12:58:49.568851 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 
2025-06-02 12:58:49.569442 | orchestrator | Monday 02 June 2025 12:58:49 +0000 (0:00:00.899) 0:00:56.741 *********** 2025-06-02 12:58:49.644577 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:49.668685 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:49.697331 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:49.721786 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:49.782284 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:49.784962 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:49.784997 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:49.785010 | orchestrator | 2025-06-02 12:58:49.785897 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-02 12:58:49.786900 | orchestrator | Monday 02 June 2025 12:58:49 +0000 (0:00:00.217) 0:00:56.958 *********** 2025-06-02 12:58:49.861609 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:49.888995 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:49.913689 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:49.935717 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:50.006625 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:50.008468 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:50.009868 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:50.011142 | orchestrator | 2025-06-02 12:58:50.012067 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-02 12:58:50.012891 | orchestrator | Monday 02 June 2025 12:58:50 +0000 (0:00:00.229) 0:00:57.188 *********** 2025-06-02 12:58:50.260271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 12:58:50.262149 | orchestrator | 2025-06-02 12:58:50.262777 | orchestrator | TASK 
[osism.commons.packages : Install needrestart package] ******************** 2025-06-02 12:58:50.263576 | orchestrator | Monday 02 June 2025 12:58:50 +0000 (0:00:00.252) 0:00:57.441 *********** 2025-06-02 12:58:51.882081 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:51.883829 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:51.884593 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:51.884647 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:51.886252 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:51.886856 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:51.888264 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:51.888764 | orchestrator | 2025-06-02 12:58:51.889761 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-02 12:58:51.890614 | orchestrator | Monday 02 June 2025 12:58:51 +0000 (0:00:01.620) 0:00:59.061 *********** 2025-06-02 12:58:52.509782 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:52.510173 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:52.511743 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:52.512809 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:52.513687 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:52.514653 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:52.515740 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:52.516317 | orchestrator | 2025-06-02 12:58:52.516924 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-02 12:58:52.517679 | orchestrator | Monday 02 June 2025 12:58:52 +0000 (0:00:00.627) 0:00:59.688 *********** 2025-06-02 12:58:52.581803 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:52.605977 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:52.641406 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:52.667116 | orchestrator | ok: [testbed-node-5] 2025-06-02 
12:58:52.729010 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:52.729634 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:52.731007 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:52.731503 | orchestrator | 2025-06-02 12:58:52.732380 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-02 12:58:52.732892 | orchestrator | Monday 02 June 2025 12:58:52 +0000 (0:00:00.221) 0:00:59.910 *********** 2025-06-02 12:58:54.063835 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:54.064132 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:54.065614 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:54.066153 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:54.067536 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:54.069279 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:54.070059 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:54.071157 | orchestrator | 2025-06-02 12:58:54.072153 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-02 12:58:54.072581 | orchestrator | Monday 02 June 2025 12:58:54 +0000 (0:00:01.332) 0:01:01.242 *********** 2025-06-02 12:58:55.678741 | orchestrator | changed: [testbed-manager] 2025-06-02 12:58:55.679013 | orchestrator | changed: [testbed-node-4] 2025-06-02 12:58:55.679888 | orchestrator | changed: [testbed-node-3] 2025-06-02 12:58:55.680646 | orchestrator | changed: [testbed-node-5] 2025-06-02 12:58:55.681154 | orchestrator | changed: [testbed-node-2] 2025-06-02 12:58:55.682198 | orchestrator | changed: [testbed-node-0] 2025-06-02 12:58:55.682488 | orchestrator | changed: [testbed-node-1] 2025-06-02 12:58:55.683115 | orchestrator | 2025-06-02 12:58:55.684212 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-02 12:58:55.684351 | orchestrator | Monday 02 June 2025 12:58:55 +0000 (0:00:01.615) 0:01:02.858 *********** 2025-06-02 
12:58:58.045446 | orchestrator | ok: [testbed-manager] 2025-06-02 12:58:58.046302 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:58:58.046343 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:58:58.047003 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:58:58.047330 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:58:58.047795 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:58:58.048323 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:58:58.048891 | orchestrator | 2025-06-02 12:58:58.049762 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-02 12:58:58.050186 | orchestrator | Monday 02 June 2025 12:58:58 +0000 (0:00:02.366) 0:01:05.224 *********** 2025-06-02 12:59:35.061360 | orchestrator | ok: [testbed-manager] 2025-06-02 12:59:35.061516 | orchestrator | ok: [testbed-node-3] 2025-06-02 12:59:35.061529 | orchestrator | ok: [testbed-node-4] 2025-06-02 12:59:35.061535 | orchestrator | ok: [testbed-node-0] 2025-06-02 12:59:35.061541 | orchestrator | ok: [testbed-node-1] 2025-06-02 12:59:35.061546 | orchestrator | ok: [testbed-node-2] 2025-06-02 12:59:35.061855 | orchestrator | ok: [testbed-node-5] 2025-06-02 12:59:35.061869 | orchestrator | 2025-06-02 12:59:35.061914 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-02 12:59:35.061924 | orchestrator | Monday 02 June 2025 12:59:35 +0000 (0:00:36.988) 0:01:42.213 *********** 2025-06-02 13:00:48.814553 | orchestrator | changed: [testbed-manager] 2025-06-02 13:00:48.814673 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:00:48.815359 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:00:48.816755 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:00:48.817720 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:00:48.818516 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:00:48.820036 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:00:48.820439 | 
orchestrator | 2025-06-02 13:00:48.821287 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-02 13:00:48.821809 | orchestrator | Monday 02 June 2025 13:00:48 +0000 (0:01:13.777) 0:02:55.990 *********** 2025-06-02 13:00:50.326165 | orchestrator | ok: [testbed-manager] 2025-06-02 13:00:50.326263 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:00:50.326275 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:00:50.326284 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:00:50.326294 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:00:50.326359 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:00:50.326588 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:00:50.326973 | orchestrator | 2025-06-02 13:00:50.327406 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-02 13:00:50.327756 | orchestrator | Monday 02 June 2025 13:00:50 +0000 (0:00:01.510) 0:02:57.501 *********** 2025-06-02 13:01:01.128080 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:01:01.129641 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:01:01.129680 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:01:01.129694 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:01:01.131292 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:01:01.131972 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:01:01.132629 | orchestrator | changed: [testbed-manager] 2025-06-02 13:01:01.133452 | orchestrator | 2025-06-02 13:01:01.134098 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-02 13:01:01.135738 | orchestrator | Monday 02 June 2025 13:01:01 +0000 (0:00:10.787) 0:03:08.288 *********** 2025-06-02 13:01:01.484718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-02 13:01:01.485019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-02 13:01:01.485521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-02 13:01:01.485868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-02 13:01:01.486214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 
1024}]}) 2025-06-02 13:01:01.487573 | orchestrator | 2025-06-02 13:01:01.487598 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-02 13:01:01.487610 | orchestrator | Monday 02 June 2025 13:01:01 +0000 (0:00:00.378) 0:03:08.667 *********** 2025-06-02 13:01:01.544700 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 13:01:01.544854 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 13:01:01.570133 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:01:01.570221 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 13:01:01.595616 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:01:01.634500 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 13:01:01.634707 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:01:01.657015 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:01:02.149954 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:01:02.150113 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:01:02.150981 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:01:02.151869 | orchestrator | 2025-06-02 13:01:02.152373 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-02 13:01:02.152755 | orchestrator | Monday 02 June 2025 13:01:02 +0000 (0:00:00.663) 0:03:09.330 *********** 2025-06-02 13:01:02.201419 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 13:01:02.203103 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 13:01:02.203604 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 13:01:02.204065 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 13:01:02.204593 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 13:01:02.248649 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 13:01:02.248911 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 13:01:02.249534 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 13:01:02.249952 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 13:01:02.250559 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 13:01:02.251348 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 13:01:02.251833 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 13:01:02.252140 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 13:01:02.252695 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 13:01:02.253297 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 13:01:02.253735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 13:01:02.254269 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 13:01:02.254656 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 13:01:02.255227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 13:01:02.255668 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 13:01:02.256175 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 13:01:02.256490 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 13:01:02.256923 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 13:01:02.283857 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:01:02.283938 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 13:01:02.284342 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 13:01:02.284608 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 13:01:02.284993 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 13:01:02.285018 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 13:01:02.286285 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 13:01:02.286596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 13:01:02.286620 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 13:01:02.287619 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 13:01:02.331002 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 13:01:02.331352 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:01:02.331940 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 13:01:02.333096 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 13:01:02.333678 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 13:01:02.334187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 13:01:02.334540 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 13:01:02.335259 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 13:01:02.335597 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 13:01:02.355289 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:01:06.897870 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:01:06.898942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 13:01:06.899110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 13:01:06.901035 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 13:01:06.902588 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 13:01:06.903583 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:01:06.904971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-02 13:01:06.906357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:01:06.906993 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:01:06.908576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-02 13:01:06.909668 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:01:06.910880 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:01:06.911715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-02 13:01:06.913272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:01:06.913566 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:01:06.914887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-02 13:01:06.916269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:01:06.917041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:01:06.918161 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-02 13:01:06.919826 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:01:06.921350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:01:06.922567 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:01:06.923647 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:01:06.924704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:01:06.925926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:01:06.927278 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:01:06.927779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-02 13:01:06.928739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-02 13:01:06.929665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-02 13:01:06.930582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:01:06.930916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-02 13:01:06.931602 | orchestrator |
2025-06-02 13:01:06.932755 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-02 13:01:06.933074 | orchestrator | Monday 02 June 2025 13:01:06 +0000 (0:00:04.745) 0:03:14.075 ***********
2025-06-02 13:01:07.444425 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.445447 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.446289 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.447700 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.448689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.449168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.449722 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-02 13:01:07.450881 | orchestrator |
2025-06-02 13:01:07.451055 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-02 13:01:07.452064 | orchestrator | Monday 02 June 2025 13:01:07 +0000 (0:00:00.549) 0:03:14.625 ***********
2025-06-02 13:01:07.503675 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.528289 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:07.602228 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.917317 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:07.917420 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.917435 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:07.917447 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.917493 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:07.919208 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.919236 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.920122 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-02 13:01:07.920909 | orchestrator |
2025-06-02 13:01:07.923049 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-02 13:01:07.923380 | orchestrator | Monday 02 June 2025 13:01:07 +0000 (0:00:00.470) 0:03:15.096 ***********
2025-06-02 13:01:07.973791 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.002523 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:08.089672 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.090349 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.456665 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:08.457580 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:08.459467 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.460794 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:08.461748 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.463296 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.463964 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-02 13:01:08.465024 | orchestrator |
2025-06-02 13:01:08.465983 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-02 13:01:08.466564 | orchestrator | Monday 02 June 2025 13:01:08 +0000 (0:00:00.541) 0:03:15.637 ***********
2025-06-02 13:01:08.506221 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:08.525475 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:01:08.557002 | orchestrator
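For reference, the sysctl tasks logged above amount to a sysctl.conf fragment roughly like the following. This is a sketch assembled from the logged items, not the osism.commons.sysctl role's actual template; the file name and grouping comments are assumptions. The first batch (keepalive, buffers, backlog) was applied to testbed-node-0..2 by the preceding task, `vm.swappiness` to every host, the conntrack limit to the compute nodes, and the inotify limit to the k3s nodes.

```ini
# Sketch assembled from the logged items (not the role's template).
# Applied to testbed-node-0..2 by the preceding task:
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
# "generic" (all hosts):
vm.swappiness = 1
# "compute" (testbed-node-3..5 in this run):
net.netfilter.nf_conntrack_max = 1048576
# "k3s_node" (testbed-node-3..5 in this run):
fs.inotify.max_user_instances = 1024
```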
| skipping: [testbed-node-4]
2025-06-02 13:01:08.590779 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:08.620672 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:08.756263 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:08.757459 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:08.758915 | orchestrator |
2025-06-02 13:01:08.759932 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-02 13:01:08.760965 | orchestrator | Monday 02 June 2025 13:01:08 +0000 (0:00:00.298) 0:03:15.936 ***********
2025-06-02 13:01:14.274865 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:14.276689 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:14.277973 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:14.280590 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:14.280629 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:14.280644 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:14.281229 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:14.281885 | orchestrator |
2025-06-02 13:01:14.282627 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-02 13:01:14.283505 | orchestrator | Monday 02 June 2025 13:01:14 +0000 (0:00:05.519) 0:03:21.455 ***********
2025-06-02 13:01:14.345876 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-02 13:01:14.346107 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-02 13:01:14.379465 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:14.379692 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-02 13:01:14.414421 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:01:14.415183 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 13:01:14.447352 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:01:14.484824 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-02 13:01:14.484921 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:14.558267 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-02 13:01:14.558747 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:14.560208 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:14.561292 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-02 13:01:14.562199 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:14.563057 | orchestrator |
2025-06-02 13:01:14.563366 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 13:01:14.564171 | orchestrator | Monday 02 June 2025 13:01:14 +0000 (0:00:00.285) 0:03:21.740 ***********
2025-06-02 13:01:15.595443 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 13:01:15.596526 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 13:01:15.597191 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 13:01:15.598751 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 13:01:15.599146 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 13:01:15.599911 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 13:01:15.600486 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 13:01:15.601193 | orchestrator |
2025-06-02 13:01:15.601917 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 13:01:15.602485 | orchestrator | Monday 02 June 2025 13:01:15 +0000 (0:00:01.035) 0:03:22.775 ***********
2025-06-02 13:01:16.046265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:16.046544 | orchestrator |
2025-06-02 13:01:16.047199 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 13:01:16.048063 | orchestrator | Monday 02 June 2025 13:01:16 +0000 (0:00:00.450) 0:03:23.226 ***********
2025-06-02 13:01:17.392894 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:17.393726 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:17.395853 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:17.395878 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:17.395887 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:17.396398 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:17.397297 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:17.398173 | orchestrator |
2025-06-02 13:01:17.398514 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 13:01:17.399685 | orchestrator | Monday 02 June 2025 13:01:17 +0000 (0:00:01.347) 0:03:24.573 ***********
2025-06-02 13:01:18.004259 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:18.004362 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:18.004377 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:18.004388 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:18.004399 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:18.004768 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:18.005511 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:18.006273 | orchestrator |
2025-06-02 13:01:18.006861 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 13:01:18.007485 | orchestrator | Monday 02 June 2025 13:01:17 +0000 (0:00:00.607) 0:03:25.180 ***********
2025-06-02 13:01:18.596070 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:18.596289 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:18.597213 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:18.598381 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:18.598916 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:18.599826 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:18.600477 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:18.601056 | orchestrator |
2025-06-02 13:01:18.601854 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 13:01:18.602323 | orchestrator | Monday 02 June 2025 13:01:18 +0000 (0:00:00.596) 0:03:25.777 ***********
2025-06-02 13:01:19.185414 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:19.185518 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:19.187125 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:19.188287 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:19.189182 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:19.189557 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:19.190130 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:19.190620 | orchestrator |
2025-06-02 13:01:19.191157 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 13:01:19.191860 | orchestrator | Monday 02 June 2025 13:01:19 +0000 (0:00:00.587) 0:03:26.364 ***********
2025-06-02 13:01:20.254316 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867866.6769228, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.255007 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867932.563121, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.256163 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867928.671633, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.256941 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867924.9822636, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.257674 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867933.2442396, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.259235 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867924.205668, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.259575 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748867931.006373, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.260376 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867888.1669228, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.260881 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867827.2509181, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.261395 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867828.5261872, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.261970 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867827.2692895, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.263096 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867828.9500918, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.263823 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867824.7378778, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.264174 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748867831.7097962, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 13:01:20.264744 | orchestrator |
2025-06-02 13:01:20.265277 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-02 13:01:20.265654 | orchestrator | Monday 02 June 2025 13:01:20 +0000 (0:00:01.069) 0:03:27.434 ***********
2025-06-02 13:01:21.372864 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:21.373733 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:21.374833 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:21.376557 |
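The "Remove pam_motd.so rule" task above iterates over the files found in /etc/pam.d (here /etc/pam.d/sshd and /etc/pam.d/login) and drops the rules that invoke pam_motd.so, so PAM no longer prints the dynamic motd. A minimal sketch of that text transformation in Python (the role itself presumably uses an Ansible module such as lineinfile or replace; this is not its source):

```python
import re

def strip_pam_motd(pam_text: str) -> str:
    """Return pam_text with every rule that references pam_motd.so removed.

    Sketch of what the 'Remove pam_motd.so rule' task does to files like
    /etc/pam.d/sshd; all other rules are kept unchanged.
    """
    kept = [line for line in pam_text.splitlines()
            if not re.search(r'\bpam_motd\.so\b', line)]
    return "\n".join(kept) + "\n"

# Hypothetical sample resembling the stock Debian/Ubuntu sshd PAM file.
sample = (
    "session    optional     pam_motd.so  motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so\n"
)
print(strip_pam_motd(sample))
```

After the transformation only the pam_limits.so rule remains, which matches the `changed` status reported for every host above.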
orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:21.378050 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:21.378894 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:21.380009 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:21.380980 | orchestrator |
2025-06-02 13:01:21.381786 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-02 13:01:21.383941 | orchestrator | Monday 02 June 2025 13:01:21 +0000 (0:00:01.118) 0:03:28.552 ***********
2025-06-02 13:01:22.537283 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:22.540573 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:22.540626 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:22.543008 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:22.544171 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:22.544822 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:22.546167 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:22.546453 | orchestrator |
2025-06-02 13:01:22.547532 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-02 13:01:22.548229 | orchestrator | Monday 02 June 2025 13:01:22 +0000 (0:00:01.164) 0:03:29.716 ***********
2025-06-02 13:01:23.667262 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:23.667357 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:23.667677 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:23.668917 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:23.669367 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:23.670145 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:23.670494 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:23.671626 | orchestrator |
2025-06-02 13:01:23.672293 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-02 13:01:23.672608 | orchestrator | Monday 02 June 2025 13:01:23 +0000 (0:00:01.129) 0:03:30.845 ***********
2025-06-02 13:01:23.735958 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:23.761162 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:01:23.810229 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:01:23.842400 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:23.873282 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:23.937682 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:23.938898 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:23.939732 | orchestrator |
2025-06-02 13:01:23.940508 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-02 13:01:23.941315 | orchestrator | Monday 02 June 2025 13:01:23 +0000 (0:00:00.267) 0:03:31.113 ***********
2025-06-02 13:01:24.673191 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:24.673573 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:24.674959 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:24.676216 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:24.677469 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:24.679942 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:24.680085 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:24.681516 | orchestrator |
2025-06-02 13:01:24.682075 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-02 13:01:24.682922 | orchestrator | Monday 02 June 2025 13:01:24 +0000 (0:00:00.738) 0:03:31.852 ***********
2025-06-02 13:01:25.043087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:25.043566 | orchestrator |
2025-06-02 13:01:25.043992 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-02 13:01:25.045287 | orchestrator | Monday 02 June 2025 13:01:25 +0000 (0:00:00.370) 0:03:32.222 ***********
2025-06-02 13:01:32.397235 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:32.398924 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:32.400262 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:32.402833 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:32.403730 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:32.404810 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:32.405908 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:32.407417 | orchestrator |
2025-06-02 13:01:32.407953 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-02 13:01:32.408661 | orchestrator | Monday 02 June 2025 13:01:32 +0000 (0:00:07.354) 0:03:39.576 ***********
2025-06-02 13:01:33.617646 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:33.617755 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:33.618249 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:33.619991 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:33.620938 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:33.622139 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:33.623066 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:33.623998 | orchestrator |
2025-06-02 13:01:33.624774 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-02 13:01:33.625608 | orchestrator | Monday 02 June 2025 13:01:33 +0000 (0:00:01.220) 0:03:40.797 ***********
2025-06-02 13:01:34.698512 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:34.698635 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:34.699309 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:34.699410 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:34.703980 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:34.704029 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:34.704090 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:34.705351 | orchestrator |
2025-06-02 13:01:34.706382 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-02 13:01:34.706949 | orchestrator | Monday 02 June 2025 13:01:34 +0000 (0:00:01.081) 0:03:41.878 ***********
2025-06-02 13:01:35.146730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:35.146867 | orchestrator |
2025-06-02 13:01:35.147756 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-02 13:01:35.150832 | orchestrator | Monday 02 June 2025 13:01:35 +0000 (0:00:00.449) 0:03:42.327 ***********
2025-06-02 13:01:42.738174 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:42.738829 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:42.740440 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:42.742702 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:42.743306 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:42.744488 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:42.744962 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:42.745944 | orchestrator |
2025-06-02 13:01:42.746547 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-02 13:01:42.747483 | orchestrator | Monday 02 June 2025 13:01:42 +0000 (0:00:07.589) 0:03:49.917 ***********
2025-06-02 13:01:43.367553 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:43.367988 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:43.371110 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:43.372270 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:43.372505 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:43.373458 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:43.374564 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:43.375219 | orchestrator |
2025-06-02 13:01:43.376613 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-02 13:01:43.377590 | orchestrator | Monday 02 June 2025 13:01:43 +0000 (0:00:00.631) 0:03:50.548 ***********
2025-06-02 13:01:44.492910 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:44.494112 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:44.494892 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:44.496262 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:44.497009 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:44.498201 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:44.498807 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:44.499996 | orchestrator |
2025-06-02 13:01:44.500307 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-02 13:01:44.501321 | orchestrator | Monday 02 June 2025 13:01:44 +0000 (0:00:01.123) 0:03:51.672 ***********
2025-06-02 13:01:45.517589 | orchestrator | changed: [testbed-manager]
2025-06-02 13:01:45.518714 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:01:45.519733 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:01:45.521349 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:01:45.522975 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:01:45.524255 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:01:45.524830 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:01:45.526010 | orchestrator |
2025-06-02 13:01:45.527052 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each
operating system] ******
2025-06-02 13:01:45.527585 | orchestrator | Monday 02 June 2025 13:01:45 +0000 (0:00:01.025) 0:03:52.698 ***********
2025-06-02 13:01:45.620486 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:45.656124 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:45.691627 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:45.730702 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:45.798554 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:45.799499 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:45.800139 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:45.801108 | orchestrator |
2025-06-02 13:01:45.801683 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-02 13:01:45.802244 | orchestrator | Monday 02 June 2025 13:01:45 +0000 (0:00:00.282) 0:03:52.980 ***********
2025-06-02 13:01:45.915967 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:45.946092 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:45.979178 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:46.016162 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:46.082195 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:46.082299 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:46.084049 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:46.085103 | orchestrator |
2025-06-02 13:01:46.085474 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-02 13:01:46.086641 | orchestrator | Monday 02 June 2025 13:01:46 +0000 (0:00:00.282) 0:03:53.263 ***********
2025-06-02 13:01:46.178939 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:46.214000 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:46.242914 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:46.275979 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:46.357497 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:46.358561 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:46.360398 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:46.361053 | orchestrator |
2025-06-02 13:01:46.361865 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-02 13:01:46.362916 | orchestrator | Monday 02 June 2025 13:01:46 +0000 (0:00:00.275) 0:03:53.539 ***********
2025-06-02 13:01:51.915376 | orchestrator | ok: [testbed-manager]
2025-06-02 13:01:51.915493 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:01:51.915509 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:01:51.915704 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:01:51.916627 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:01:51.917720 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:01:51.918446 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:01:51.919453 | orchestrator |
2025-06-02 13:01:51.920145 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-02 13:01:51.920876 | orchestrator | Monday 02 June 2025 13:01:51 +0000 (0:00:05.553) 0:03:59.093 ***********
2025-06-02 13:01:52.279309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:01:52.279917 | orchestrator |
2025-06-02 13:01:52.280524 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-02 13:01:52.281405 | orchestrator | Monday 02 June 2025 13:01:52 +0000 (0:00:00.366) 0:03:59.460 ***********
2025-06-02 13:01:52.357871 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.358207 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-02 13:01:52.358909 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.401667 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-02 13:01:52.402231 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:01:52.402763 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.467899 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-02 13:01:52.468257 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:01:52.468811 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.469524 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-02 13:01:52.503334 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:01:52.504012 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.547894 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:01:52.547941 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 13:01:52.548322 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.641380 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:01:52.641537 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 13:01:52.642455 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:01:52.643562 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 13:01:52.643945 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 13:01:52.644846 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:01:52.646922 | orchestrator |
2025-06-02 13:01:52.647119 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 13:01:52.647634 | orchestrator | Monday 02 June 2025 13:01:52 +0000 (0:00:00.361) 0:03:59.822 ***********
2025-06-02 13:01:53.001718 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:01:53.002884 | orchestrator | 2025-06-02 13:01:53.004018 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-02 13:01:53.004698 | orchestrator | Monday 02 June 2025 13:01:52 +0000 (0:00:00.357) 0:04:00.179 *********** 2025-06-02 13:01:53.077301 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-02 13:01:53.077443 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-02 13:01:53.109223 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:01:53.161108 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:01:53.161294 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-02 13:01:53.162211 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-02 13:01:53.192578 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:01:53.232048 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:01:53.232138 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-02 13:01:53.313370 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:01:53.314069 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-02 13:01:53.314989 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:01:53.315711 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-02 13:01:53.316526 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:01:53.317826 | orchestrator | 2025-06-02 13:01:53.318370 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-02 13:01:53.319066 | orchestrator | Monday 02 June 2025 13:01:53 +0000 (0:00:00.315) 
0:04:00.495 *********** 2025-06-02 13:01:53.820325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:01:53.820535 | orchestrator | 2025-06-02 13:01:53.821300 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-06-02 13:01:53.822118 | orchestrator | Monday 02 June 2025 13:01:53 +0000 (0:00:00.505) 0:04:01.000 *********** 2025-06-02 13:02:27.307533 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:27.307687 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:27.307705 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:27.307717 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:27.307728 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:27.307790 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:27.308940 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:27.309078 | orchestrator | 2025-06-02 13:02:27.309925 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-02 13:02:27.310580 | orchestrator | Monday 02 June 2025 13:02:27 +0000 (0:00:33.481) 0:04:34.481 *********** 2025-06-02 13:02:35.222092 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:35.222206 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:35.226788 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:35.227766 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:35.228346 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:35.228672 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:35.229577 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:35.232092 | orchestrator | 2025-06-02 13:02:35.232189 | orchestrator | TASK [osism.commons.cleanup : Uninstall 
unattended-upgrades package] *********** 2025-06-02 13:02:35.234356 | orchestrator | Monday 02 June 2025 13:02:35 +0000 (0:00:07.917) 0:04:42.399 *********** 2025-06-02 13:02:42.372034 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:42.372155 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:42.372850 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:42.374764 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:42.375789 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:42.376323 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:42.377055 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:42.378145 | orchestrator | 2025-06-02 13:02:42.378949 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-02 13:02:42.379927 | orchestrator | Monday 02 June 2025 13:02:42 +0000 (0:00:07.150) 0:04:49.549 *********** 2025-06-02 13:02:44.000940 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:44.001153 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:44.004448 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:44.004479 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:44.004491 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:44.004503 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:44.004870 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:44.005126 | orchestrator | 2025-06-02 13:02:44.006345 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-02 13:02:44.006825 | orchestrator | Monday 02 June 2025 13:02:43 +0000 (0:00:01.631) 0:04:51.181 *********** 2025-06-02 13:02:49.379036 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:49.379238 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:49.380947 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:49.381521 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:49.383277 | 
orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:49.384398 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:49.384774 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:49.385806 | orchestrator | 2025-06-02 13:02:49.386664 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-02 13:02:49.387241 | orchestrator | Monday 02 June 2025 13:02:49 +0000 (0:00:05.376) 0:04:56.558 *********** 2025-06-02 13:02:49.761790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:02:49.763915 | orchestrator | 2025-06-02 13:02:49.764466 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-02 13:02:49.766225 | orchestrator | Monday 02 June 2025 13:02:49 +0000 (0:00:00.383) 0:04:56.941 *********** 2025-06-02 13:02:50.513564 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:50.515876 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:50.515908 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:50.516478 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:50.517103 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:50.517744 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:50.518449 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:50.519148 | orchestrator | 2025-06-02 13:02:50.519899 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-02 13:02:50.520556 | orchestrator | Monday 02 June 2025 13:02:50 +0000 (0:00:00.751) 0:04:57.692 *********** 2025-06-02 13:02:52.064628 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:52.065929 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:52.065963 | orchestrator | ok: [testbed-node-3] 
2025-06-02 13:02:52.066574 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:52.067663 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:52.068227 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:52.069484 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:52.069574 | orchestrator | 2025-06-02 13:02:52.070470 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-02 13:02:52.071143 | orchestrator | Monday 02 June 2025 13:02:52 +0000 (0:00:01.551) 0:04:59.243 *********** 2025-06-02 13:02:52.885541 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:02:52.885831 | orchestrator | changed: [testbed-manager] 2025-06-02 13:02:52.886546 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:02:52.887076 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:02:52.887854 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:02:52.888534 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:02:52.888908 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:02:52.889581 | orchestrator | 2025-06-02 13:02:52.890243 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-02 13:02:52.890797 | orchestrator | Monday 02 June 2025 13:02:52 +0000 (0:00:00.822) 0:05:00.066 *********** 2025-06-02 13:02:52.976978 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:53.027757 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:53.052999 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:53.084593 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:53.146886 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:53.147825 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:53.149271 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:53.150267 | orchestrator | 2025-06-02 13:02:53.151232 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 
2025-06-02 13:02:53.152182 | orchestrator | Monday 02 June 2025 13:02:53 +0000 (0:00:00.261) 0:05:00.328 *********** 2025-06-02 13:02:53.217472 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:53.246498 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:53.276292 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:53.306963 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:53.346807 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:53.515689 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:53.516569 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:53.517371 | orchestrator | 2025-06-02 13:02:53.518686 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-02 13:02:53.519548 | orchestrator | Monday 02 June 2025 13:02:53 +0000 (0:00:00.367) 0:05:00.695 *********** 2025-06-02 13:02:53.622792 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:53.657159 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:53.690865 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:53.726678 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:53.805167 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:53.806274 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:53.808069 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:53.808890 | orchestrator | 2025-06-02 13:02:53.809995 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-02 13:02:53.811748 | orchestrator | Monday 02 June 2025 13:02:53 +0000 (0:00:00.290) 0:05:00.986 *********** 2025-06-02 13:02:53.905639 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:53.940346 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:53.972681 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:54.011984 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:54.082983 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 13:02:54.083097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:54.083207 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:54.083819 | orchestrator | 2025-06-02 13:02:54.084566 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-02 13:02:54.085527 | orchestrator | Monday 02 June 2025 13:02:54 +0000 (0:00:00.277) 0:05:01.264 *********** 2025-06-02 13:02:54.188899 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:54.216141 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:54.271262 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:54.308098 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:54.374264 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:54.375524 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:54.379602 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:54.380485 | orchestrator | 2025-06-02 13:02:54.381668 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-02 13:02:54.382341 | orchestrator | Monday 02 June 2025 13:02:54 +0000 (0:00:00.290) 0:05:01.554 *********** 2025-06-02 13:02:54.475399 | orchestrator | ok: [testbed-manager] => { 2025-06-02 13:02:54.478819 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.478857 | orchestrator | } 2025-06-02 13:02:54.507107 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:02:54.510070 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.510169 | orchestrator | } 2025-06-02 13:02:54.536500 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 13:02:54.537610 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.540833 | orchestrator | } 2025-06-02 13:02:54.568804 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:02:54.569402 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.573243 | orchestrator | } 2025-06-02 13:02:54.645822 | orchestrator | ok: [testbed-node-0] 
=> { 2025-06-02 13:02:54.646358 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.647405 | orchestrator | } 2025-06-02 13:02:54.648641 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 13:02:54.649390 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.650697 | orchestrator | } 2025-06-02 13:02:54.651136 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 13:02:54.652118 | orchestrator |  "docker_version": "5:27.5.1" 2025-06-02 13:02:54.652771 | orchestrator | } 2025-06-02 13:02:54.653177 | orchestrator | 2025-06-02 13:02:54.653826 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-02 13:02:54.654480 | orchestrator | Monday 02 June 2025 13:02:54 +0000 (0:00:00.273) 0:05:01.828 *********** 2025-06-02 13:02:54.767196 | orchestrator | ok: [testbed-manager] => { 2025-06-02 13:02:54.767372 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:54.767944 | orchestrator | } 2025-06-02 13:02:54.899187 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:02:54.899898 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:54.900565 | orchestrator | } 2025-06-02 13:02:54.935749 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 13:02:54.938547 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:54.939885 | orchestrator | } 2025-06-02 13:02:54.969261 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:02:54.971048 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:54.971327 | orchestrator | } 2025-06-02 13:02:55.042595 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 13:02:55.042818 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:55.042966 | orchestrator | } 2025-06-02 13:02:55.043897 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 13:02:55.044003 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:55.044356 | orchestrator | } 2025-06-02 13:02:55.044965 | orchestrator | 
ok: [testbed-node-2] => { 2025-06-02 13:02:55.045214 | orchestrator |  "docker_cli_version": "5:27.5.1" 2025-06-02 13:02:55.046328 | orchestrator | } 2025-06-02 13:02:55.046868 | orchestrator | 2025-06-02 13:02:55.047819 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-02 13:02:55.047851 | orchestrator | Monday 02 June 2025 13:02:55 +0000 (0:00:00.396) 0:05:02.224 *********** 2025-06-02 13:02:55.145797 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:55.184007 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:55.219544 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:55.248781 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:55.299184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:55.301477 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:55.301631 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:55.302445 | orchestrator | 2025-06-02 13:02:55.302833 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-02 13:02:55.303389 | orchestrator | Monday 02 June 2025 13:02:55 +0000 (0:00:00.254) 0:05:02.478 *********** 2025-06-02 13:02:55.401927 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:55.432159 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:55.472239 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:55.498410 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:02:55.549375 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:02:55.549567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:02:55.550434 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:02:55.551099 | orchestrator | 2025-06-02 13:02:55.552375 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-02 13:02:55.553080 | orchestrator | Monday 02 June 2025 13:02:55 +0000 (0:00:00.252) 0:05:02.730 
*********** 2025-06-02 13:02:55.935965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:02:55.936674 | orchestrator | 2025-06-02 13:02:55.937427 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-02 13:02:55.938323 | orchestrator | Monday 02 June 2025 13:02:55 +0000 (0:00:00.386) 0:05:03.116 *********** 2025-06-02 13:02:56.754572 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:56.755420 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:56.756612 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:56.757475 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:56.758094 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:56.759371 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:56.759987 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:56.760621 | orchestrator | 2025-06-02 13:02:56.761068 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-02 13:02:56.761683 | orchestrator | Monday 02 June 2025 13:02:56 +0000 (0:00:00.816) 0:05:03.933 *********** 2025-06-02 13:02:59.457820 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:02:59.458776 | orchestrator | ok: [testbed-manager] 2025-06-02 13:02:59.459886 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:02:59.460395 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:02:59.462064 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:02:59.462493 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:02:59.463217 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:02:59.464089 | orchestrator | 2025-06-02 13:02:59.464398 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-02 13:02:59.465047 
| orchestrator | Monday 02 June 2025 13:02:59 +0000 (0:00:02.704) 0:05:06.637 *********** 2025-06-02 13:02:59.527042 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-02 13:02:59.528186 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-02 13:02:59.591272 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-02 13:02:59.592379 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-02 13:02:59.659664 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-02 13:02:59.660862 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:02:59.662447 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-02 13:02:59.663821 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-02 13:02:59.664909 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-02 13:02:59.869530 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:02:59.870174 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-02 13:02:59.871387 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-02 13:02:59.872611 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-02 13:02:59.873765 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-02 13:02:59.935640 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:02:59.935786 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-02 13:02:59.936802 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-02 13:03:00.002872 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:00.003566 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-02 13:03:00.004890 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-02 13:03:00.005530 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-02 13:03:00.006750 | 
orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-02 13:03:00.141236 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:00.141430 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:00.142913 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-02 13:03:00.145770 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-02 13:03:00.145795 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-02 13:03:00.145807 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:00.145819 | orchestrator | 2025-06-02 13:03:00.146152 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-02 13:03:00.147233 | orchestrator | Monday 02 June 2025 13:03:00 +0000 (0:00:00.683) 0:05:07.321 *********** 2025-06-02 13:03:05.902377 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:05.902932 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:05.904812 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:05.907147 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:05.908300 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:05.908898 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:05.909980 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:05.910870 | orchestrator | 2025-06-02 13:03:05.911367 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-02 13:03:05.912240 | orchestrator | Monday 02 June 2025 13:03:05 +0000 (0:00:05.760) 0:05:13.081 *********** 2025-06-02 13:03:06.926225 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:06.926375 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:06.927745 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:06.927851 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:06.928919 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:06.929358 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:06.931271 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:06.931791 | orchestrator | 2025-06-02 13:03:06.932476 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-02 13:03:06.932978 | orchestrator | Monday 02 June 2025 13:03:06 +0000 (0:00:01.025) 0:05:14.106 *********** 2025-06-02 13:03:14.041967 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:14.042982 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:14.043670 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:14.044602 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:14.046489 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:14.047565 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:14.048102 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:14.049259 | orchestrator | 2025-06-02 13:03:14.050254 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-02 13:03:14.050733 | orchestrator | Monday 02 June 2025 13:03:14 +0000 (0:00:07.114) 0:05:21.221 *********** 2025-06-02 13:03:17.133254 | orchestrator | changed: [testbed-manager] 2025-06-02 13:03:17.135780 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:17.135884 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:17.137278 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:17.138553 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:17.139488 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:17.140373 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:17.141025 | orchestrator | 2025-06-02 13:03:17.141679 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-02 13:03:17.142287 | orchestrator | Monday 02 June 2025 13:03:17 +0000 (0:00:03.089) 0:05:24.311 *********** 2025-06-02 13:03:18.631820 | orchestrator | ok: 
[testbed-manager] 2025-06-02 13:03:18.631905 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:18.631919 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:18.632578 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:18.634128 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:18.634968 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:18.636115 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:18.637130 | orchestrator | 2025-06-02 13:03:18.638079 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-02 13:03:18.638820 | orchestrator | Monday 02 June 2025 13:03:18 +0000 (0:00:01.497) 0:05:25.809 *********** 2025-06-02 13:03:19.992132 | orchestrator | ok: [testbed-manager] 2025-06-02 13:03:19.992316 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:03:19.995197 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:03:19.995728 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:03:19.996377 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:03:19.996881 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:03:19.998587 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:03:20.000048 | orchestrator | 2025-06-02 13:03:20.000567 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-02 13:03:20.000673 | orchestrator | Monday 02 June 2025 13:03:19 +0000 (0:00:01.362) 0:05:27.172 *********** 2025-06-02 13:03:20.186203 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:03:20.250282 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:03:20.312845 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:03:20.414085 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:03:20.552688 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:03:20.553355 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:03:20.553924 | orchestrator | changed: [testbed-manager] 
2025-06-02 13:03:20.555342 | orchestrator |
2025-06-02 13:03:20.555882 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-02 13:03:20.556779 | orchestrator | Monday 02 June 2025 13:03:20 +0000 (0:00:00.560) 0:05:27.732 ***********
2025-06-02 13:03:29.660529 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:29.660868 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:29.661800 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:29.665163 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:29.665226 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:29.666218 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:29.668088 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:29.668560 | orchestrator |
2025-06-02 13:03:29.669175 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-02 13:03:29.669890 | orchestrator | Monday 02 June 2025 13:03:29 +0000 (0:00:09.107) 0:05:36.839 ***********
2025-06-02 13:03:30.532356 | orchestrator | changed: [testbed-manager]
2025-06-02 13:03:30.533301 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:30.533333 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:30.534674 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:30.535498 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:30.536313 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:30.537658 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:30.538958 | orchestrator |
2025-06-02 13:03:30.539320 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-02 13:03:30.540023 | orchestrator | Monday 02 June 2025 13:03:30 +0000 (0:00:00.872) 0:05:37.712 ***********
2025-06-02 13:03:38.938787 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:38.940747 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:38.940790 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:38.940842 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:38.940905 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:38.941739 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:38.943220 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:38.943876 | orchestrator |
2025-06-02 13:03:38.945245 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-02 13:03:38.945773 | orchestrator | Monday 02 June 2025 13:03:38 +0000 (0:00:08.407) 0:05:46.119 ***********
2025-06-02 13:03:49.139092 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:49.139592 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:49.140452 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:49.140998 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:49.141294 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:49.141741 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:49.142420 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:49.144624 | orchestrator |
2025-06-02 13:03:49.144647 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-02 13:03:49.144660 | orchestrator | Monday 02 June 2025 13:03:49 +0000 (0:00:10.200) 0:05:56.319 ***********
2025-06-02 13:03:49.481167 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-02 13:03:50.306905 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-02 13:03:50.307834 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-02 13:03:50.308655 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-02 13:03:50.309203 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-02 13:03:50.310426 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-02 13:03:50.310892 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-02 13:03:50.312019 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-02 13:03:50.312712 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-02 13:03:50.313658 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-02 13:03:50.313994 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-02 13:03:50.315318 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-02 13:03:50.316606 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-02 13:03:50.317265 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-02 13:03:50.317878 | orchestrator |
2025-06-02 13:03:50.318518 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-02 13:03:50.319537 | orchestrator | Monday 02 June 2025 13:03:50 +0000 (0:00:01.167) 0:05:57.486 ***********
2025-06-02 13:03:50.490140 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:50.548929 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:50.615903 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:50.676181 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:50.736038 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:50.847819 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:50.848016 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:50.850357 | orchestrator |
2025-06-02 13:03:50.851271 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-02 13:03:50.852329 | orchestrator | Monday 02 June 2025 13:03:50 +0000 (0:00:00.542) 0:05:58.029 ***********
2025-06-02 13:03:54.427227 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:54.427349 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:03:54.428780 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:03:54.429525 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:03:54.430810 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:03:54.431282 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:03:54.432059 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:03:54.433491 | orchestrator |
2025-06-02 13:03:54.434426 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-02 13:03:54.435019 | orchestrator | Monday 02 June 2025 13:03:54 +0000 (0:00:03.575) 0:06:01.605 ***********
2025-06-02 13:03:54.556883 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:54.618944 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:54.681399 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:54.754611 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:54.820353 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:54.906589 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:54.906826 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:54.906849 | orchestrator |
2025-06-02 13:03:54.906864 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-02 13:03:54.907058 | orchestrator | Monday 02 June 2025 13:03:54 +0000 (0:00:00.482) 0:06:02.087 ***********
2025-06-02 13:03:54.991953 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-02 13:03:54.992041 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-02 13:03:55.058597 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:55.058813 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-02 13:03:55.058835 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-02 13:03:55.125223 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:55.125727 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-02 13:03:55.129245 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-02 13:03:55.196775 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:55.197056 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-02 13:03:55.200761 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-02 13:03:55.261983 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:55.262422 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-02 13:03:55.263076 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-02 13:03:55.329651 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:55.330383 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-02 13:03:55.330595 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-02 13:03:55.435247 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:55.436116 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-02 13:03:55.438857 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-02 13:03:55.439424 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:55.440562 | orchestrator |
2025-06-02 13:03:55.441568 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-02 13:03:55.442103 | orchestrator | Monday 02 June 2025 13:03:55 +0000 (0:00:00.529) 0:06:02.617 ***********
2025-06-02 13:03:55.565822 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:55.643884 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:55.700826 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:55.763811 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:55.829927 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:55.923018 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:55.923530 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:55.926742 | orchestrator |
2025-06-02 13:03:55.926790 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-02 13:03:55.926805 | orchestrator | Monday 02 June 2025 13:03:55 +0000 (0:00:00.485) 0:06:03.103 ***********
2025-06-02 13:03:56.046852 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:56.105364 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:56.163327 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:56.229348 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:56.288919 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:56.386417 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:56.387010 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:56.387818 | orchestrator |
2025-06-02 13:03:56.388452 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-02 13:03:56.390091 | orchestrator | Monday 02 June 2025 13:03:56 +0000 (0:00:00.463) 0:06:03.566 ***********
2025-06-02 13:03:56.533930 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:03:56.593335 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:03:56.823874 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:03:56.888368 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:03:56.950159 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:03:57.069536 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:03:57.070492 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:03:57.073861 | orchestrator |
2025-06-02 13:03:57.073907 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 13:03:57.074072 | orchestrator | Monday 02 June 2025 13:03:57 +0000 (0:00:00.682) 0:06:04.249 ***********
2025-06-02 13:03:58.675089 | orchestrator | ok: [testbed-manager]
2025-06-02 13:03:58.675196 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:03:58.675873 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:03:58.677399 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:03:58.678506 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:03:58.680838 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:03:58.681768 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:03:58.682912 | orchestrator |
2025-06-02 13:03:58.684031 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 13:03:58.685034 | orchestrator | Monday 02 June 2025 13:03:58 +0000 (0:00:01.602) 0:06:05.851 ***********
2025-06-02 13:03:59.507574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:03:59.509157 | orchestrator |
2025-06-02 13:03:59.510846 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 13:03:59.512167 | orchestrator | Monday 02 June 2025 13:03:59 +0000 (0:00:00.835) 0:06:06.687 ***********
2025-06-02 13:04:00.312506 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:00.312617 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:00.312729 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:00.313883 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:00.314943 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:00.315472 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:00.316067 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:00.316844 | orchestrator |
2025-06-02 13:04:00.317512 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 13:04:00.318008 | orchestrator | Monday 02 June 2025 13:04:00 +0000 (0:00:00.803) 0:06:07.490 ***********
2025-06-02 13:04:00.754504 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:00.828397 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:01.326728 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:01.327524 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:01.328457 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:01.329392 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:01.330419 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:01.331200 | orchestrator |
2025-06-02 13:04:01.332049 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 13:04:01.332978 | orchestrator | Monday 02 June 2025 13:04:01 +0000 (0:00:01.016) 0:06:08.507 ***********
2025-06-02 13:04:02.637072 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:02.638384 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:02.639949 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:02.640975 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:02.642270 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:02.643489 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:02.644220 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:02.645189 | orchestrator |
2025-06-02 13:04:02.645808 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 13:04:02.646817 | orchestrator | Monday 02 June 2025 13:04:02 +0000 (0:00:01.310) 0:06:09.817 ***********
2025-06-02 13:04:02.760158 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:04.000310 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:04.000484 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:04.001978 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:04.004510 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:04.005724 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:04.008019 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:04.008044 | orchestrator |
2025-06-02 13:04:04.008058 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 13:04:04.009191 | orchestrator | Monday 02 June 2025 13:04:03 +0000 (0:00:01.361) 0:06:11.178 ***********
2025-06-02 13:04:05.313405 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:05.357237 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:05.357306 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:05.357315 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:05.357324 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:05.357332 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:05.357339 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:05.357347 | orchestrator |
2025-06-02 13:04:05.357355 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 13:04:05.357386 | orchestrator | Monday 02 June 2025 13:04:05 +0000 (0:00:01.314) 0:06:12.492 ***********
2025-06-02 13:04:06.766323 | orchestrator | changed: [testbed-manager]
2025-06-02 13:04:06.767700 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:06.769536 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:06.771328 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:06.773658 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:06.775031 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:06.776139 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:06.777110 | orchestrator |
2025-06-02 13:04:06.780017 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-02 13:04:06.780612 | orchestrator | Monday 02 June 2025 13:04:06 +0000 (0:00:01.451) 0:06:13.944 ***********
2025-06-02 13:04:07.558740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:04:07.559737 | orchestrator |
2025-06-02 13:04:07.561148 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-02 13:04:07.563049 | orchestrator | Monday 02 June 2025 13:04:07 +0000 (0:00:00.791) 0:06:14.736 ***********
2025-06-02 13:04:08.892547 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:08.893117 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:08.893782 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:08.894596 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:08.895551 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:08.896012 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:08.896826 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:08.897747 | orchestrator |
2025-06-02 13:04:08.898514 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-02 13:04:08.899124 | orchestrator | Monday 02 June 2025 13:04:08 +0000 (0:00:01.337) 0:06:16.073 ***********
2025-06-02 13:04:09.991584 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:09.991739 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:09.993037 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:09.993804 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:09.994651 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:09.995715 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:09.996102 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:09.996783 | orchestrator |
2025-06-02 13:04:09.997412 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-02 13:04:09.998634 | orchestrator | Monday 02 June 2025 13:04:09 +0000 (0:00:01.095) 0:06:17.169 ***********
2025-06-02 13:04:11.244475 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:11.245335 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:11.246701 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:11.247814 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:11.248565 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:11.249983 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:11.250700 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:11.252181 | orchestrator |
2025-06-02 13:04:11.253565 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-02 13:04:11.253998 | orchestrator | Monday 02 June 2025 13:04:11 +0000 (0:00:01.255) 0:06:18.424 ***********
2025-06-02 13:04:12.360656 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:12.360872 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:12.365713 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:12.365757 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:12.365770 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:12.365782 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:12.365793 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:12.365805 | orchestrator |
2025-06-02 13:04:12.367543 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-02 13:04:12.367614 | orchestrator | Monday 02 June 2025 13:04:12 +0000 (0:00:01.113) 0:06:19.538 ***********
2025-06-02 13:04:13.464876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:04:13.465065 | orchestrator |
2025-06-02 13:04:13.465770 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.466775 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.834) 0:06:20.372 ***********
2025-06-02 13:04:13.467954 | orchestrator |
2025-06-02 13:04:13.468980 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.470243 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.038) 0:06:20.410 ***********
2025-06-02 13:04:13.470984 | orchestrator |
2025-06-02 13:04:13.471792 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.473900 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.043) 0:06:20.454 ***********
2025-06-02 13:04:13.474315 | orchestrator |
2025-06-02 13:04:13.474335 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.474600 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.036) 0:06:20.491 ***********
2025-06-02 13:04:13.475361 | orchestrator |
2025-06-02 13:04:13.475874 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.476338 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.036) 0:06:20.527 ***********
2025-06-02 13:04:13.476828 | orchestrator |
2025-06-02 13:04:13.477367 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.477844 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.042) 0:06:20.570 ***********
2025-06-02 13:04:13.478392 | orchestrator |
2025-06-02 13:04:13.478797 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 13:04:13.479285 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.037) 0:06:20.607 ***********
2025-06-02 13:04:13.479730 | orchestrator |
2025-06-02 13:04:13.480007 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 13:04:13.480380 | orchestrator | Monday 02 June 2025 13:04:13 +0000 (0:00:00.036) 0:06:20.643 ***********
2025-06-02 13:04:14.743032 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:14.743155 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:14.744278 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:14.744727 | orchestrator |
2025-06-02 13:04:14.745446 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-02 13:04:14.746314 | orchestrator | Monday 02 June 2025 13:04:14 +0000 (0:00:01.277) 0:06:21.920 ***********
2025-06-02 13:04:16.001120 | orchestrator | changed: [testbed-manager]
2025-06-02 13:04:16.003793 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:16.003867 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:16.003938 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:16.004976 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:16.006305 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:16.006448 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:16.007005 | orchestrator |
2025-06-02 13:04:16.007611 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-06-02 13:04:16.008138 | orchestrator | Monday 02 June 2025 13:04:15 +0000 (0:00:01.260) 0:06:23.181 ***********
2025-06-02 13:04:17.102462 | orchestrator | changed: [testbed-manager]
2025-06-02 13:04:17.103460 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:17.105488 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:17.106086 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:17.107142 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:17.108456 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:17.108980 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:17.109657 | orchestrator |
2025-06-02 13:04:17.110571 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-06-02 13:04:17.111305 | orchestrator | Monday 02 June 2025 13:04:17 +0000 (0:00:01.099) 0:06:24.281 ***********
2025-06-02 13:04:17.229876 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:19.499187 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:19.501136 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:19.501238 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:19.502879 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:19.504776 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:19.505586 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:19.506889 | orchestrator |
2025-06-02 13:04:19.508226 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-06-02 13:04:19.508594 | orchestrator | Monday 02 June 2025 13:04:19 +0000 (0:00:02.397) 0:06:26.679 ***********
2025-06-02 13:04:19.597593 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:19.598146 | orchestrator |
2025-06-02 13:04:19.599366 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-06-02 13:04:19.599989 | orchestrator | Monday 02 June 2025 13:04:19 +0000 (0:00:00.098) 0:06:26.777 ***********
2025-06-02 13:04:20.561066 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:20.561264 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:20.564043 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:20.564790 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:20.565206 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:20.566430 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:20.567416 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:20.568229 | orchestrator |
2025-06-02 13:04:20.569099 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-06-02 13:04:20.569780 | orchestrator | Monday 02 June 2025 13:04:20 +0000 (0:00:00.962) 0:06:27.740 ***********
2025-06-02 13:04:20.841094 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:20.901067 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:20.970115 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:21.031574 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:21.091766 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:21.208516 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:21.208813 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:21.209457 | orchestrator |
2025-06-02 13:04:21.210171 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-06-02 13:04:21.210883 | orchestrator | Monday 02 June 2025 13:04:21 +0000 (0:00:00.649) 0:06:28.390 ***********
2025-06-02 13:04:22.042900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:04:22.043696 | orchestrator |
2025-06-02 13:04:22.044109 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-06-02 13:04:22.045839 | orchestrator | Monday 02 June 2025 13:04:22 +0000 (0:00:00.831) 0:06:29.221 ***********
2025-06-02 13:04:22.501041 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:22.923230 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:22.923427 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:22.925967 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:22.926517 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:22.927581 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:22.928447 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:22.929022 | orchestrator |
2025-06-02 13:04:22.929729 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-06-02 13:04:22.930751 | orchestrator | Monday 02 June 2025 13:04:22 +0000 (0:00:00.880) 0:06:30.101 ***********
2025-06-02 13:04:25.473040 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-06-02 13:04:25.473182 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-06-02 13:04:25.474677 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-06-02 13:04:25.475103 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-06-02 13:04:25.477160 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-06-02 13:04:25.477259 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-06-02 13:04:25.478099 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-06-02 13:04:25.478718 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-06-02 13:04:25.479347 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-06-02 13:04:25.480210 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-06-02 13:04:25.481099 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-06-02 13:04:25.481833 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-06-02 13:04:25.482475 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-06-02 13:04:25.483035 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-06-02 13:04:25.483720 | orchestrator |
2025-06-02 13:04:25.484231 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-06-02 13:04:25.484647 | orchestrator | Monday 02 June 2025 13:04:25 +0000 (0:00:02.544) 0:06:32.646 ***********
2025-06-02 13:04:25.606217 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:25.668558 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:25.739740 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:25.799050 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:25.861150 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:25.956359 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:25.956840 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:25.957454 | orchestrator |
2025-06-02 13:04:25.958101 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-06-02 13:04:25.958565 | orchestrator | Monday 02 June 2025 13:04:25 +0000 (0:00:00.492) 0:06:33.139 ***********
2025-06-02 13:04:26.720164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:04:26.720294 | orchestrator |
2025-06-02 13:04:26.720508 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-06-02 13:04:26.720528 | orchestrator | Monday 02 June 2025 13:04:26 +0000 (0:00:00.756) 0:06:33.896 ***********
2025-06-02 13:04:27.242548 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:27.310727 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:27.728641 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:27.729094 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:27.730945 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:27.731439 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:27.732369 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:27.733197 | orchestrator |
2025-06-02 13:04:27.733581 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-06-02 13:04:27.734841 | orchestrator | Monday 02 June 2025 13:04:27 +0000 (0:00:01.010) 0:06:34.907 ***********
2025-06-02 13:04:28.153224 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:28.518797 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:28.519384 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:28.520999 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:28.521777 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:28.522540 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:28.523867 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:28.524347 | orchestrator |
2025-06-02 13:04:28.525246 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-06-02 13:04:28.525894 | orchestrator | Monday 02 June 2025 13:04:28 +0000 (0:00:00.790) 0:06:35.697 ***********
2025-06-02 13:04:28.648151 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:28.707901 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:28.770149 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:28.836354 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:28.897419 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:28.977967 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:28.978582 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:28.982407 | orchestrator |
2025-06-02 13:04:28.982924 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-02 13:04:28.984902 | orchestrator | Monday 02 June 2025 13:04:28 +0000 (0:00:00.459) 0:06:36.157 ***********
2025-06-02 13:04:30.310734 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:30.311224 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:30.312960 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:30.313235 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:30.314496 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:30.315250 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:30.316087 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:30.316577 | orchestrator |
2025-06-02 13:04:30.317417 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-02 13:04:30.318259 | orchestrator | Monday 02 June 2025 13:04:30 +0000 (0:00:01.333) 0:06:37.490 ***********
2025-06-02 13:04:30.445177 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:30.510761 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:30.568857 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:30.629748 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:30.696028 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:30.773440 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:30.773874 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:30.774886 | orchestrator |
2025-06-02 13:04:30.775738 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-02 13:04:30.776572 | orchestrator | Monday 02 June 2025 13:04:30 +0000 (0:00:00.462) 0:06:37.953 ***********
2025-06-02 13:04:37.844397 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:37.848154 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:37.848191 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:37.848204 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:37.848215 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:37.849369 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:37.851067 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:37.852291 | orchestrator |
2025-06-02 13:04:37.853573 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-02 13:04:37.853887 | orchestrator | Monday 02 June 2025 13:04:37 +0000 (0:00:07.068) 0:06:45.022 ***********
2025-06-02 13:04:39.140092 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:39.140897 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:39.141851 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:39.143130 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:39.143769 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:39.144481 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:39.145427 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:39.146381 | orchestrator |
2025-06-02 13:04:39.147012 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-02 13:04:39.147712 | orchestrator | Monday 02 June 2025 13:04:39 +0000 (0:00:01.298) 0:06:46.320 ***********
2025-06-02 13:04:40.779223 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:40.780913 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:40.781595 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:40.782517 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:40.783952 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:40.785039 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:40.785840 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:40.787405 | orchestrator |
2025-06-02 13:04:40.788078 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-02 13:04:40.788917 | orchestrator | Monday 02 June 2025 13:04:40 +0000 (0:00:01.637) 0:06:47.958 ***********
2025-06-02 13:04:42.487223 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:42.487354 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:04:42.488509 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:04:42.489968 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:04:42.491558 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:04:42.491626 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:04:42.492726 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:04:42.493354 | orchestrator |
2025-06-02 13:04:42.494291 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 13:04:42.494666 | orchestrator | Monday 02 June 2025 13:04:42 +0000 (0:00:01.704) 0:06:49.662 ***********
2025-06-02 13:04:42.890244 | orchestrator | ok: [testbed-manager]
2025-06-02 13:04:43.292593 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:04:43.292768 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:04:43.293262 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:04:43.293984 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:04:43.294844 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:04:43.295534 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:04:43.296182 | orchestrator |
2025-06-02 13:04:43.296694 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 13:04:43.297345 | orchestrator | Monday 02 June 2025 13:04:43 +0000 (0:00:00.810) 0:06:50.473 ***********
2025-06-02 13:04:43.423451 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:43.482610 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:43.545026 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:43.611761 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:43.675856 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:44.063385 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:44.064707 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:44.066974 | orchestrator |
2025-06-02 13:04:44.067000 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-02 13:04:44.068430 | orchestrator | Monday 02 June 2025 13:04:44 +0000 (0:00:00.768) 0:06:51.241 ***********
2025-06-02 13:04:44.180778 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:04:44.247795 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:04:44.308792 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:04:44.367440 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:04:44.435890 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:04:44.524790 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:04:44.525890 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:04:44.527170 | orchestrator |
2025-06-02 13:04:44.528390 | orchestrator |
TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-02 13:04:44.529425 | orchestrator | Monday 02 June 2025 13:04:44 +0000 (0:00:00.462) 0:06:51.703 *********** 2025-06-02 13:04:44.661878 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:44.727455 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:44.797520 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:45.016777 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:45.081490 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:45.188114 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:45.188315 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:45.189240 | orchestrator | 2025-06-02 13:04:45.190591 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-02 13:04:45.191600 | orchestrator | Monday 02 June 2025 13:04:45 +0000 (0:00:00.663) 0:06:52.367 *********** 2025-06-02 13:04:45.318872 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:45.378867 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:45.444833 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:45.507096 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:45.570099 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:45.671218 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:45.671917 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:45.672953 | orchestrator | 2025-06-02 13:04:45.674305 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-02 13:04:45.674784 | orchestrator | Monday 02 June 2025 13:04:45 +0000 (0:00:00.485) 0:06:52.852 *********** 2025-06-02 13:04:45.797338 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:45.865408 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:45.926552 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:45.987075 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:46.052703 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 13:04:46.155138 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:46.156279 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:46.157230 | orchestrator | 2025-06-02 13:04:46.158880 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-02 13:04:46.159993 | orchestrator | Monday 02 June 2025 13:04:46 +0000 (0:00:00.482) 0:06:53.335 *********** 2025-06-02 13:04:51.682987 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:51.683993 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:51.685722 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:51.686777 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:51.688220 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:51.688960 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:51.690138 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:51.691107 | orchestrator | 2025-06-02 13:04:51.691788 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-02 13:04:51.692835 | orchestrator | Monday 02 June 2025 13:04:51 +0000 (0:00:05.528) 0:06:58.863 *********** 2025-06-02 13:04:51.814833 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:04:51.875230 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:04:51.934245 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:04:52.001939 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:04:52.062374 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:04:52.171910 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:04:52.172479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:04:52.173432 | orchestrator | 2025-06-02 13:04:52.179585 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-02 13:04:52.179633 | orchestrator | Monday 02 June 2025 13:04:52 +0000 (0:00:00.488) 0:06:59.352 *********** 2025-06-02 13:04:53.092242 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:04:53.092413 | orchestrator | 2025-06-02 13:04:53.092912 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-02 13:04:53.093390 | orchestrator | Monday 02 June 2025 13:04:53 +0000 (0:00:00.920) 0:07:00.273 *********** 2025-06-02 13:04:54.808447 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:54.808949 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:54.809805 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:54.810976 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:54.812145 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:54.812934 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:54.813463 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:54.814114 | orchestrator | 2025-06-02 13:04:54.814396 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-02 13:04:54.815033 | orchestrator | Monday 02 June 2025 13:04:54 +0000 (0:00:01.713) 0:07:01.986 *********** 2025-06-02 13:04:55.905372 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:55.905534 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:55.907004 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:55.907771 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:55.908726 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:55.910139 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:55.911198 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:55.911931 | orchestrator | 2025-06-02 13:04:55.912813 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-02 13:04:55.913583 | orchestrator | Monday 02 June 2025 13:04:55 +0000 (0:00:01.095) 
0:07:03.082 *********** 2025-06-02 13:04:56.495304 | orchestrator | ok: [testbed-manager] 2025-06-02 13:04:56.918411 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:04:56.919611 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:04:56.919883 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:04:56.921099 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:04:56.921312 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:04:56.923764 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:04:56.924429 | orchestrator | 2025-06-02 13:04:56.925713 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-02 13:04:56.926799 | orchestrator | Monday 02 June 2025 13:04:56 +0000 (0:00:01.011) 0:07:04.094 *********** 2025-06-02 13:04:58.545398 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.545809 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.549595 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.550178 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.550958 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.551609 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.552178 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-02 13:04:58.553794 | orchestrator | 2025-06-02 13:04:58.554449 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-02 13:04:58.555079 | orchestrator | Monday 02 June 2025 13:04:58 +0000 (0:00:01.630) 0:07:05.724 *********** 2025-06-02 13:04:59.311921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:04:59.312048 | orchestrator | 2025-06-02 13:04:59.312144 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-02 13:04:59.313039 | orchestrator | Monday 02 June 2025 13:04:59 +0000 (0:00:00.766) 0:07:06.491 *********** 2025-06-02 13:05:07.876185 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:07.878614 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:07.878685 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:07.880930 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:07.882119 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:07.883311 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:07.885435 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:07.885455 | orchestrator | 2025-06-02 13:05:07.886502 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-02 13:05:07.887528 | orchestrator | Monday 02 June 2025 13:05:07 +0000 (0:00:08.562) 0:07:15.054 *********** 2025-06-02 13:05:09.527001 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:09.527402 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:09.532276 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:09.532303 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 13:05:09.532315 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:09.532325 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:09.532337 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:09.532897 | orchestrator | 2025-06-02 13:05:09.533777 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-02 13:05:09.534368 | orchestrator | Monday 02 June 2025 13:05:09 +0000 (0:00:01.651) 0:07:16.705 *********** 2025-06-02 13:05:10.764396 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:10.765959 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:10.766240 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:10.767271 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:10.768171 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:10.769239 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:10.770142 | orchestrator | 2025-06-02 13:05:10.770785 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-02 13:05:10.771796 | orchestrator | Monday 02 June 2025 13:05:10 +0000 (0:00:01.235) 0:07:17.941 *********** 2025-06-02 13:05:12.133939 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:12.134118 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:12.134168 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:12.135286 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:12.137168 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:12.137994 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:12.138555 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:12.139582 | orchestrator | 2025-06-02 13:05:12.140301 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-02 13:05:12.141274 | orchestrator | 2025-06-02 13:05:12.141654 | orchestrator | TASK [Include hardening role] 
************************************************** 2025-06-02 13:05:12.142393 | orchestrator | Monday 02 June 2025 13:05:12 +0000 (0:00:01.370) 0:07:19.311 *********** 2025-06-02 13:05:12.269129 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:12.325616 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:12.388798 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:12.449001 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:12.525315 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:12.642459 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:12.642841 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:12.643370 | orchestrator | 2025-06-02 13:05:12.644119 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-02 13:05:12.644663 | orchestrator | 2025-06-02 13:05:12.645250 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-02 13:05:12.645711 | orchestrator | Monday 02 June 2025 13:05:12 +0000 (0:00:00.511) 0:07:19.823 *********** 2025-06-02 13:05:13.896846 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:13.897983 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:13.898420 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:13.899087 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:13.900198 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:13.900961 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:13.901575 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:13.902218 | orchestrator | 2025-06-02 13:05:13.902864 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-02 13:05:13.903440 | orchestrator | Monday 02 June 2025 13:05:13 +0000 (0:00:01.251) 0:07:21.074 *********** 2025-06-02 13:05:15.421980 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:15.422722 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:15.423547 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:15.427532 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:15.427958 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:15.428807 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:15.429981 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:15.430338 | orchestrator | 2025-06-02 13:05:15.430909 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-02 13:05:15.431474 | orchestrator | Monday 02 June 2025 13:05:15 +0000 (0:00:01.525) 0:07:22.600 *********** 2025-06-02 13:05:15.539111 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:15.604995 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:15.666436 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:15.727998 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:15.794985 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:16.166280 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:16.167035 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:16.168344 | orchestrator | 2025-06-02 13:05:16.171768 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-02 13:05:16.171809 | orchestrator | Monday 02 June 2025 13:05:16 +0000 (0:00:00.745) 0:07:23.346 *********** 2025-06-02 13:05:17.384702 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:17.385601 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:17.386304 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:17.388017 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:17.389405 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:17.390491 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:17.391137 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:17.392084 | orchestrator | 2025-06-02 13:05:17.392704 | 
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-02 13:05:17.395123 | orchestrator | 2025-06-02 13:05:17.395171 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-02 13:05:17.395187 | orchestrator | Monday 02 June 2025 13:05:17 +0000 (0:00:01.217) 0:07:24.564 *********** 2025-06-02 13:05:18.297865 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:05:18.301569 | orchestrator | 2025-06-02 13:05:18.301637 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 13:05:18.301653 | orchestrator | Monday 02 June 2025 13:05:18 +0000 (0:00:00.912) 0:07:25.476 *********** 2025-06-02 13:05:18.712018 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:19.092866 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:19.093030 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:19.094123 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:19.095017 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:19.095563 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:19.096820 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:19.096850 | orchestrator | 2025-06-02 13:05:19.098352 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 13:05:19.098824 | orchestrator | Monday 02 June 2025 13:05:19 +0000 (0:00:00.795) 0:07:26.272 *********** 2025-06-02 13:05:20.187331 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:20.189089 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:20.189769 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:20.191542 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:20.192856 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:20.194325 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 13:05:20.195954 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:20.196253 | orchestrator | 2025-06-02 13:05:20.197726 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-02 13:05:20.198534 | orchestrator | Monday 02 June 2025 13:05:20 +0000 (0:00:01.094) 0:07:27.366 *********** 2025-06-02 13:05:21.118931 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:05:21.120267 | orchestrator | 2025-06-02 13:05:21.120848 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 13:05:21.122078 | orchestrator | Monday 02 June 2025 13:05:21 +0000 (0:00:00.931) 0:07:28.298 *********** 2025-06-02 13:05:21.534273 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:21.988024 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:21.988368 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:21.989569 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:21.990089 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:21.991000 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:21.991741 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:21.992209 | orchestrator | 2025-06-02 13:05:21.993022 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 13:05:21.993674 | orchestrator | Monday 02 June 2025 13:05:21 +0000 (0:00:00.870) 0:07:29.168 *********** 2025-06-02 13:05:23.042693 | orchestrator | changed: [testbed-manager] 2025-06-02 13:05:23.044723 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:23.046104 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:23.047241 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:23.048380 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:23.049335 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 13:05:23.050533 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:05:23.051325 | orchestrator | 2025-06-02 13:05:23.053118 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:05:23.053164 | orchestrator | 2025-06-02 13:05:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 13:05:23.053180 | orchestrator | 2025-06-02 13:05:23 | INFO  | Please wait and do not abort execution. 2025-06-02 13:05:23.053555 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-02 13:05:23.054692 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:05:23.055317 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:05:23.056307 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:05:23.056988 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-02 13:05:23.057575 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:05:23.058486 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-02 13:05:23.059851 | orchestrator | 2025-06-02 13:05:23.061633 | orchestrator | 2025-06-02 13:05:23.065888 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:05:23.066073 | orchestrator | Monday 02 June 2025 13:05:23 +0000 (0:00:01.055) 0:07:30.223 *********** 2025-06-02 13:05:23.067554 | orchestrator | =============================================================================== 2025-06-02 13:05:23.068156 | orchestrator | 
osism.commons.packages : Install required packages --------------------- 73.78s 2025-06-02 13:05:23.068923 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.99s 2025-06-02 13:05:23.070366 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.48s 2025-06-02 13:05:23.070912 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.33s 2025-06-02 13:05:23.071864 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.10s 2025-06-02 13:05:23.072997 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.79s 2025-06-02 13:05:23.073748 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.20s 2025-06-02 13:05:23.074751 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.11s 2025-06-02 13:05:23.075595 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.56s 2025-06-02 13:05:23.076323 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.41s 2025-06-02 13:05:23.077238 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.92s 2025-06-02 13:05:23.078164 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.59s 2025-06-02 13:05:23.079276 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.35s 2025-06-02 13:05:23.080863 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.15s 2025-06-02 13:05:23.081568 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.11s 2025-06-02 13:05:23.082689 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.07s 2025-06-02 13:05:23.085280 | orchestrator | 
osism.services.docker : Install apt-transport-https package ------------- 5.76s 2025-06-02 13:05:23.085934 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.55s 2025-06-02 13:05:23.086939 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.53s 2025-06-02 13:05:23.087546 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.52s 2025-06-02 13:05:23.714759 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 13:05:23.714850 | orchestrator | + osism apply network 2025-06-02 13:05:25.770756 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:05:25.770848 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:05:25.770862 | orchestrator | Registering Redlock._release_script 2025-06-02 13:05:25.833093 | orchestrator | 2025-06-02 13:05:25 | INFO  | Task c64462bd-e065-4638-bb52-bb0e55426412 (network) was prepared for execution. 2025-06-02 13:05:25.833144 | orchestrator | 2025-06-02 13:05:25 | INFO  | It takes a moment until task c64462bd-e065-4638-bb52-bb0e55426412 (network) has been started and output is visible here. 
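When post-processing a run like this, the PLAY RECAP block above is the part worth checking mechanically: every host must report `failed=0` and `unreachable=0` before the job moves on to `osism apply network`. A minimal sketch of such a check in Python (the helper name and regex are assumptions for illustration, not part of the osism tooling):

```python
import re

# Matches the "key=value" counters in an Ansible PLAY RECAP host line,
# e.g. "testbed-manager : ok=162  changed=38  unreachable=0 failed=0 ..."
RECAP_COUNTER_RE = re.compile(r"(\w+)=(\d+)")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split a recap line into (hostname, counters). Hypothetical helper."""
    host, _, counters = line.partition(" : ")
    return host.strip(), {k: int(v) for k, v in RECAP_COUNTER_RE.findall(counters)}

host, stats = parse_recap_line(
    "testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0"
)
# A CI gate would fail the build if either counter is non-zero.
assert host == "testbed-manager"
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

Running this over all seven recap lines above would confirm the bootstrap play finished cleanly on every testbed host.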
2025-06-02 13:05:29.896776 | orchestrator |
2025-06-02 13:05:29.897597 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-02 13:05:29.898423 | orchestrator |
2025-06-02 13:05:29.900132 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-02 13:05:29.901123 | orchestrator | Monday 02 June 2025 13:05:29 +0000 (0:00:00.261) 0:00:00.261 ***********
2025-06-02 13:05:30.040661 | orchestrator | ok: [testbed-manager]
2025-06-02 13:05:30.115078 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:05:30.197846 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:05:30.270232 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:05:30.432991 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:05:30.553030 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:05:30.553500 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:05:30.554758 | orchestrator |
2025-06-02 13:05:30.555646 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-02 13:05:30.558109 | orchestrator | Monday 02 June 2025 13:05:30 +0000 (0:00:00.655) 0:00:00.916 ***********
2025-06-02 13:05:31.709752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:05:31.709939 | orchestrator |
2025-06-02 13:05:31.713053 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-02 13:05:31.713084 | orchestrator | Monday 02 June 2025 13:05:31 +0000 (0:00:01.155) 0:00:02.072 ***********
2025-06-02 13:05:33.544682 | orchestrator | ok: [testbed-manager]
2025-06-02 13:05:33.545356 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:05:33.549182 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:05:33.549215 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:05:33.549228 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:05:33.550962 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:05:33.552201 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:05:33.553276 | orchestrator |
2025-06-02 13:05:33.554259 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-02 13:05:33.554805 | orchestrator | Monday 02 June 2025 13:05:33 +0000 (0:00:01.837) 0:00:03.909 ***********
2025-06-02 13:05:35.305448 | orchestrator | ok: [testbed-manager]
2025-06-02 13:05:35.306718 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:05:35.309248 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:05:35.312237 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:05:35.314140 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:05:35.321041 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:05:35.321079 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:05:35.322081 | orchestrator |
2025-06-02 13:05:35.324687 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-02 13:05:35.324703 | orchestrator | Monday 02 June 2025 13:05:35 +0000 (0:00:01.756) 0:00:05.666 ***********
2025-06-02 13:05:35.826939 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-02 13:05:35.827036 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-02 13:05:35.827272 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-02 13:05:36.293669 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-02 13:05:36.293773 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-02 13:05:36.295378 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-02 13:05:36.296053 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-02 13:05:36.296730 | orchestrator |
2025-06-02 13:05:36.297984 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-02 13:05:36.299067 | orchestrator | Monday 02 June 2025 13:05:36 +0000 (0:00:00.992) 0:00:06.659 ***********
2025-06-02 13:05:39.365591 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 13:05:39.365757 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:05:39.366457 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 13:05:39.367405 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 13:05:39.368355 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 13:05:39.368489 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 13:05:39.369177 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 13:05:39.369604 | orchestrator |
2025-06-02 13:05:39.370087 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 13:05:39.370677 | orchestrator | Monday 02 June 2025 13:05:39 +0000 (0:00:03.066) 0:00:09.725 ***********
2025-06-02 13:05:40.819281 | orchestrator | changed: [testbed-manager]
2025-06-02 13:05:40.819761 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:05:40.820461 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:05:40.822966 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:05:40.826321 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:05:40.826431 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:05:40.826454 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:05:40.826887 | orchestrator |
2025-06-02 13:05:40.827268 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 13:05:40.827994 | orchestrator | Monday 02 June 2025 13:05:40 +0000 (0:00:01.456) 0:00:11.182 ***********
2025-06-02 13:05:42.602807 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 13:05:42.604931 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:05:42.605461 | orchestrator | ok: [testbed-node-1
-> localhost] 2025-06-02 13:05:42.607557 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 13:05:42.608182 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 13:05:42.609334 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 13:05:42.610078 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 13:05:42.611764 | orchestrator | 2025-06-02 13:05:42.611793 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-02 13:05:42.611807 | orchestrator | Monday 02 June 2025 13:05:42 +0000 (0:00:01.782) 0:00:12.965 *********** 2025-06-02 13:05:43.014248 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:43.271514 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:43.701094 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:43.702075 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:43.704774 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:43.704799 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:43.706109 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:43.706807 | orchestrator | 2025-06-02 13:05:43.707786 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-02 13:05:43.708455 | orchestrator | Monday 02 June 2025 13:05:43 +0000 (0:00:01.097) 0:00:14.062 *********** 2025-06-02 13:05:43.856232 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:43.935048 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:44.012835 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:44.092945 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:44.182542 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:44.325891 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:44.326383 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:44.327448 | orchestrator | 2025-06-02 13:05:44.330906 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-06-02 13:05:44.330919 | orchestrator | Monday 02 June 2025 13:05:44 +0000 (0:00:00.625) 0:00:14.688 *********** 2025-06-02 13:05:46.397383 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:46.400358 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:46.401291 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:46.403041 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:46.404164 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:46.405235 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:46.406432 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:46.406996 | orchestrator | 2025-06-02 13:05:46.407969 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-02 13:05:46.410771 | orchestrator | Monday 02 June 2025 13:05:46 +0000 (0:00:02.070) 0:00:16.759 *********** 2025-06-02 13:05:46.649831 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:46.733132 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:46.815443 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:46.896203 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:47.258254 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:47.258354 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:47.258732 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-02 13:05:47.259061 | orchestrator | 2025-06-02 13:05:47.259463 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-02 13:05:47.259786 | orchestrator | Monday 02 June 2025 13:05:47 +0000 (0:00:00.866) 0:00:17.626 *********** 2025-06-02 13:05:48.840815 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:48.841324 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:05:48.842538 | orchestrator | changed: [testbed-node-2] 2025-06-02 
13:05:48.843236 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:05:48.844139 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:05:48.846062 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:05:48.846668 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:05:48.847555 | orchestrator | 2025-06-02 13:05:48.848433 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-02 13:05:48.849153 | orchestrator | Monday 02 June 2025 13:05:48 +0000 (0:00:01.576) 0:00:19.203 *********** 2025-06-02 13:05:50.019266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:05:50.019755 | orchestrator | 2025-06-02 13:05:50.021035 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 13:05:50.022458 | orchestrator | Monday 02 June 2025 13:05:50 +0000 (0:00:01.178) 0:00:20.381 *********** 2025-06-02 13:05:50.546447 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:50.955332 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:05:50.956336 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:50.956954 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:50.958330 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:50.960126 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:50.962119 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:50.962969 | orchestrator | 2025-06-02 13:05:50.964104 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-02 13:05:50.965123 | orchestrator | Monday 02 June 2025 13:05:50 +0000 (0:00:00.938) 0:00:21.320 *********** 2025-06-02 13:05:51.267070 | orchestrator | ok: [testbed-manager] 2025-06-02 13:05:51.351191 | orchestrator | ok: [testbed-node-0] 2025-06-02 
13:05:51.430320 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:05:51.513584 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:05:51.592341 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:05:51.729752 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:05:51.731392 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:05:51.733294 | orchestrator | 2025-06-02 13:05:51.734793 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-02 13:05:51.735972 | orchestrator | Monday 02 June 2025 13:05:51 +0000 (0:00:00.774) 0:00:22.095 *********** 2025-06-02 13:05:52.157883 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.158342 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.247674 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.249707 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.855076 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.856893 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.857877 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.859239 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.859885 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.860972 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.861535 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.862448 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.863375 | orchestrator | changed: [testbed-node-5] 
=> (item=/etc/netplan/50-cloud-init.yaml) 2025-06-02 13:05:52.863892 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-02 13:05:52.864318 | orchestrator | 2025-06-02 13:05:52.864945 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-02 13:05:52.865583 | orchestrator | Monday 02 June 2025 13:05:52 +0000 (0:00:01.123) 0:00:23.219 *********** 2025-06-02 13:05:53.009031 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:05:53.083467 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:05:53.161159 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:05:53.237347 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:05:53.312933 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:05:53.426663 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:05:53.426858 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:05:53.427680 | orchestrator | 2025-06-02 13:05:53.429512 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-02 13:05:53.430313 | orchestrator | Monday 02 June 2025 13:05:53 +0000 (0:00:00.570) 0:00:23.789 *********** 2025-06-02 13:05:56.744083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2025-06-02 13:05:56.744358 | orchestrator | 2025-06-02 13:05:56.744852 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-02 13:05:56.745253 | orchestrator | Monday 02 June 2025 13:05:56 +0000 (0:00:03.319) 0:00:27.109 *********** 2025-06-02 13:06:01.123479 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.123750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.124272 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.124655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.127385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.130435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.134346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.134528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': 
['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.135161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:01.135620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.136238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.136951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.140269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.140482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:01.144030 | orchestrator | 2025-06-02 13:06:01.144102 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-02 13:06:01.144473 | orchestrator | Monday 02 June 2025 13:06:01 +0000 (0:00:04.376) 0:00:31.485 *********** 2025-06-02 13:06:05.867401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.867518 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.867563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.867785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.868426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.871272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.872140 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.872530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.873814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-02 13:06:05.874104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.874819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.875673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.876032 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.876709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-02 13:06:05.877499 | orchestrator | 2025-06-02 13:06:05.877920 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-02 13:06:05.878817 | orchestrator | Monday 02 June 2025 13:06:05 +0000 (0:00:04.745) 0:00:36.230 *********** 2025-06-02 13:06:07.070903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:06:07.071310 | orchestrator | 2025-06-02 13:06:07.072484 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-02 13:06:07.074865 | orchestrator | Monday 02 June 2025 13:06:07 +0000 (0:00:01.202) 0:00:37.433 *********** 2025-06-02 13:06:07.520409 | orchestrator | ok: [testbed-manager] 2025-06-02 13:06:07.784860 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:06:08.219713 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:06:08.220110 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:06:08.220980 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:06:08.222277 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:06:08.224969 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:06:08.224996 | orchestrator | 2025-06-02 13:06:08.225008 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-06-02 13:06:08.225018 | orchestrator | Monday 02 June 2025 13:06:08 +0000 (0:00:01.152) 0:00:38.586 *********** 2025-06-02 13:06:08.332530 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.332818 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.333821 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.334475 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:08.426740 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:06:08.427145 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.428358 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.428815 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.432131 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:08.525292 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:08.525765 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.526692 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.530161 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.530673 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:08.614461 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:08.614696 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.615658 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.616128 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.616977 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:08.702570 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:08.702722 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.703797 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.707041 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.707066 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:08.796740 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:08.797922 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:08.798377 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:08.799319 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:10.175465 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:10.177991 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:10.178084 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-02 13:06:10.179239 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-02 13:06:10.180130 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-02 13:06:10.181012 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-02 13:06:10.181791 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:10.182510 | orchestrator | 2025-06-02 13:06:10.183330 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-02 13:06:10.184228 | orchestrator | Monday 02 June 2025 13:06:10 +0000 (0:00:01.951) 0:00:40.537 *********** 2025-06-02 13:06:10.327782 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:06:10.405928 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:10.522301 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:10.602317 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:10.683760 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:10.801491 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:10.802763 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:10.806170 | orchestrator | 2025-06-02 13:06:10.806226 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-02 13:06:10.806240 | orchestrator | Monday 02 June 2025 13:06:10 +0000 (0:00:00.627) 0:00:41.165 *********** 2025-06-02 13:06:10.947296 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:06:11.022917 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:06:11.252390 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:06:11.331410 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:06:11.410170 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:06:11.452703 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:06:11.452813 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:06:11.453059 | orchestrator | 2025-06-02 13:06:11.454228 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:06:11.454272 | orchestrator | 2025-06-02 13:06:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 13:06:11.454288 | orchestrator | 2025-06-02 13:06:11 | INFO  | Please wait and do not abort execution.
2025-06-02 13:06:11.454479 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 13:06:11.455173 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.455271 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.455910 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.456296 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.456610 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.457035 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:06:11.457361 | orchestrator |
2025-06-02 13:06:11.457864 | orchestrator |
2025-06-02 13:06:11.458214 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:06:11.458723 | orchestrator | Monday 02 June 2025 13:06:11 +0000 (0:00:00.654) 0:00:41.819 ***********
2025-06-02 13:06:11.459357 | orchestrator | ===============================================================================
2025-06-02 13:06:11.459685 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.75s
2025-06-02 13:06:11.460261 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.38s
2025-06-02 13:06:11.460726 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.32s
2025-06-02 13:06:11.461079 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.07s
2025-06-02 13:06:11.461441 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.07s
2025-06-02 13:06:11.461782 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.95s
2025-06-02 13:06:11.462097 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.84s
2025-06-02 13:06:11.462409 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.78s
2025-06-02 13:06:11.462731 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.76s
2025-06-02 13:06:11.463138 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.58s
2025-06-02 13:06:11.463428 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s
2025-06-02 13:06:11.463775 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.20s
2025-06-02 13:06:11.464219 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.18s
2025-06-02 13:06:11.464399 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s
2025-06-02 13:06:11.464916 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.15s
2025-06-02 13:06:11.465184 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.12s
2025-06-02 13:06:11.465479 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.10s
2025-06-02 13:06:11.465785 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2025-06-02 13:06:11.466117 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s
2025-06-02 13:06:11.466320 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s
2025-06-02 13:06:11.987471 | orchestrator | + osism apply wireguard
2025-06-02 13:06:13.613767 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:06:13.613870 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:06:13.613885 | orchestrator | Registering Redlock._release_script
2025-06-02 13:06:13.669649 | orchestrator | 2025-06-02 13:06:13 | INFO  | Task b10c0822-3c91-4293-bbdd-a5c89f907371 (wireguard) was prepared for execution.
2025-06-02 13:06:13.669732 | orchestrator | 2025-06-02 13:06:13 | INFO  | It takes a moment until task b10c0822-3c91-4293-bbdd-a5c89f907371 (wireguard) has been started and output is visible here.
2025-06-02 13:06:17.549272 | orchestrator |
2025-06-02 13:06:17.549508 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-02 13:06:17.551696 | orchestrator |
2025-06-02 13:06:17.553080 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-02 13:06:17.553524 | orchestrator | Monday 02 June 2025 13:06:17 +0000 (0:00:00.219) 0:00:00.219 ***********
2025-06-02 13:06:18.994262 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:18.994714 | orchestrator |
2025-06-02 13:06:18.995232 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-02 13:06:18.995960 | orchestrator | Monday 02 June 2025 13:06:18 +0000 (0:00:01.446) 0:00:01.665 ***********
2025-06-02 13:06:24.973502 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:24.975346 | orchestrator |
2025-06-02 13:06:24.975399 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-02 13:06:24.976199 | orchestrator | Monday 02 June 2025 13:06:24 +0000 (0:00:05.980) 0:00:07.645 ***********
2025-06-02 13:06:25.500017 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:25.500847 | orchestrator |
2025-06-02 13:06:25.501721 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-02 13:06:25.503485 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.527) 0:00:08.173 ***********
2025-06-02 13:06:25.899992 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:25.900313 | orchestrator |
2025-06-02 13:06:25.901516 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-02 13:06:25.902133 | orchestrator | Monday 02 June 2025 13:06:25 +0000 (0:00:00.399) 0:00:08.573 ***********
2025-06-02 13:06:26.406343 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:26.407176 | orchestrator |
2025-06-02 13:06:26.407957 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-02 13:06:26.409100 | orchestrator | Monday 02 June 2025 13:06:26 +0000 (0:00:00.506) 0:00:09.079 ***********
2025-06-02 13:06:26.895560 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:26.896008 | orchestrator |
2025-06-02 13:06:26.897188 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-02 13:06:26.897644 | orchestrator | Monday 02 June 2025 13:06:26 +0000 (0:00:00.489) 0:00:09.569 ***********
2025-06-02 13:06:27.307075 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:27.307788 | orchestrator |
2025-06-02 13:06:27.307847 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-02 13:06:27.308454 | orchestrator | Monday 02 June 2025 13:06:27 +0000 (0:00:00.409) 0:00:09.978 ***********
2025-06-02 13:06:28.458884 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:28.459148 | orchestrator |
2025-06-02 13:06:28.460137 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-02 13:06:28.460790 | orchestrator | Monday 02 June 2025 13:06:28 +0000 (0:00:01.152) 0:00:11.131 ***********
2025-06-02 13:06:29.376081 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 13:06:29.376273 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:29.376641 | orchestrator |
2025-06-02 13:06:29.377643 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-02 13:06:29.378555 | orchestrator | Monday 02 June 2025 13:06:29 +0000 (0:00:00.916) 0:00:12.047 ***********
2025-06-02 13:06:30.967975 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:30.969042 | orchestrator |
2025-06-02 13:06:30.970153 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-02 13:06:30.971071 | orchestrator | Monday 02 June 2025 13:06:30 +0000 (0:00:01.592) 0:00:13.640 ***********
2025-06-02 13:06:31.878432 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:31.878652 | orchestrator |
2025-06-02 13:06:31.880450 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:06:31.880936 | orchestrator | 2025-06-02 13:06:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:06:31.881228 | orchestrator | 2025-06-02 13:06:31 | INFO  | Please wait and do not abort execution.
2025-06-02 13:06:31.882353 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:06:31.883206 | orchestrator |
2025-06-02 13:06:31.883925 | orchestrator |
2025-06-02 13:06:31.884395 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:06:31.885006 | orchestrator | Monday 02 June 2025 13:06:31 +0000 (0:00:00.910) 0:00:14.550 ***********
2025-06-02 13:06:31.885726 | orchestrator | ===============================================================================
2025-06-02 13:06:31.886331 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.98s
2025-06-02 13:06:31.886914 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.59s
2025-06-02 13:06:31.887498 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.45s
2025-06-02 13:06:31.887988 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s
2025-06-02 13:06:31.888484 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.92s
2025-06-02 13:06:31.889126 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s
2025-06-02 13:06:31.889528 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s
2025-06-02 13:06:31.890111 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s
2025-06-02 13:06:31.890486 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.49s
2025-06-02 13:06:31.890985 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-06-02 13:06:31.891440 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2025-06-02 13:06:32.408261 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-02 13:06:32.446803 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-02 13:06:32.446861 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-02 13:06:32.526409 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 188 0 --:--:-- --:--:-- --:--:-- 189
2025-06-02 13:06:32.539938 | orchestrator | + osism apply --environment custom workarounds
2025-06-02 13:06:34.216294 | orchestrator | 2025-06-02 13:06:34 | INFO  | Trying to run play workarounds in environment custom
2025-06-02 13:06:34.221109 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:06:34.221174 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:06:34.221188 | orchestrator | Registering Redlock._release_script
2025-06-02 13:06:34.295016 | orchestrator | 2025-06-02 13:06:34 | INFO  | Task 04d705dd-e939-473b-a0de-70710cfdd57b (workarounds) was prepared for execution.
2025-06-02 13:06:34.295108 | orchestrator | 2025-06-02 13:06:34 | INFO  | It takes a moment until task 04d705dd-e939-473b-a0de-70710cfdd57b (workarounds) has been started and output is visible here.
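Editor's note: the TASKS RECAP blocks in this log list per-task durations in a fixed layout ("task name ---- 5.98s"). A minimal sketch for ranking those timings when triaging slow deploys; the regex and the helper name `slowest_tasks` are assumptions based only on the format shown above:

```python
import re

# Matches Ansible profile_tasks recap lines of the shape seen in this log:
# "osism.services.wireguard : Install wireguard package ------- 5.98s"
RECAP_RE = re.compile(r"^(?P<task>.+?) (-{2,}) (?P<secs>\d+\.\d+)s$")

def slowest_tasks(lines, top=3):
    """Return the `top` slowest (task, seconds) pairs found in recap lines."""
    found = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            found.append((m.group("task"), float(m.group("secs"))))
    return sorted(found, key=lambda t: t[1], reverse=True)[:top]
```

Feeding the wireguard recap above through this would surface "Install wireguard package" (5.98s) as the dominant cost.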
2025-06-02 13:06:38.192131 | orchestrator |
2025-06-02 13:06:38.193567 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:06:38.193696 | orchestrator |
2025-06-02 13:06:38.194434 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-02 13:06:38.196427 | orchestrator | Monday 02 June 2025 13:06:38 +0000 (0:00:00.144) 0:00:00.144 ***********
2025-06-02 13:06:38.355394 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-02 13:06:38.435509 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-02 13:06:38.516467 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-02 13:06:38.597915 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-02 13:06:38.783787 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-02 13:06:38.919919 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-02 13:06:38.920069 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-02 13:06:38.920923 | orchestrator |
2025-06-02 13:06:38.922615 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-02 13:06:38.923611 | orchestrator |
2025-06-02 13:06:38.923977 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 13:06:38.924910 | orchestrator | Monday 02 June 2025 13:06:38 +0000 (0:00:00.731) 0:00:00.875 ***********
2025-06-02 13:06:40.857751 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:40.860992 | orchestrator |
2025-06-02 13:06:40.861025 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-02 13:06:40.861040 | orchestrator |
2025-06-02 13:06:40.861254 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 13:06:40.862160 | orchestrator | Monday 02 June 2025 13:06:40 +0000 (0:00:01.934) 0:00:02.810 ***********
2025-06-02 13:06:42.652531 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:06:42.657622 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:06:42.657963 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:06:42.659027 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:06:42.661904 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:06:42.663707 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:06:42.664323 | orchestrator |
2025-06-02 13:06:42.666862 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-02 13:06:42.667305 | orchestrator |
2025-06-02 13:06:42.669675 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-02 13:06:42.670394 | orchestrator | Monday 02 June 2025 13:06:42 +0000 (0:00:01.792) 0:00:04.603 ***********
2025-06-02 13:06:44.148891 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.149121 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.150603 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.152499 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.153524 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.153943 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 13:06:44.155475 | orchestrator |
2025-06-02 13:06:44.156448 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-02 13:06:44.157620 | orchestrator | Monday 02 June 2025 13:06:44 +0000 (0:00:01.495) 0:00:06.099 ***********
2025-06-02 13:06:47.751546 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:06:47.751809 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:06:47.754092 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:06:47.754234 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:06:47.755061 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:06:47.756254 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:06:47.756758 | orchestrator |
2025-06-02 13:06:47.757738 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-02 13:06:47.758103 | orchestrator | Monday 02 June 2025 13:06:47 +0000 (0:00:03.605) 0:00:09.704 ***********
2025-06-02 13:06:47.900230 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:06:48.003174 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:06:48.082238 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:06:48.158882 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:06:48.461018 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:06:48.462108 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:06:48.463481 | orchestrator |
2025-06-02 13:06:48.464924 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-02 13:06:48.465908 | orchestrator |
2025-06-02 13:06:48.466664 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-02 13:06:48.467637 | orchestrator | Monday 02 June 2025 13:06:48 +0000 (0:00:00.709) 0:00:10.414 ***********
2025-06-02 13:06:50.076808 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:50.078883 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:06:50.080233 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:06:50.081535 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:06:50.082519 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:06:50.083298 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:06:50.084118 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:06:50.086136 | orchestrator |
2025-06-02 13:06:50.086197 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-02 13:06:50.086212 | orchestrator | Monday 02 June 2025 13:06:50 +0000 (0:00:01.614) 0:00:12.028 ***********
2025-06-02 13:06:51.668177 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:51.668395 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:06:51.668938 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:06:51.672118 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:06:51.672163 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:06:51.672182 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:06:51.672718 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:06:51.673377 | orchestrator |
2025-06-02 13:06:51.674086 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-02 13:06:51.675345 | orchestrator | Monday 02 June 2025 13:06:51 +0000 (0:00:01.589) 0:00:13.618 ***********
2025-06-02 13:06:53.145765 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:06:53.146253 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:06:53.183796 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:06:53.183848 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:53.183860 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:06:53.183872 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:06:53.183882 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:06:53.183894 | orchestrator |
2025-06-02 13:06:53.183906 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-02 13:06:53.183919 | orchestrator | Monday 02 June 2025 13:06:53 +0000 (0:00:01.481) 0:00:15.099 ***********
2025-06-02 13:06:54.855281 | orchestrator | changed: [testbed-manager]
2025-06-02 13:06:54.857425 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:06:54.858470 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:06:54.859349 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:06:54.860433 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:06:54.861304 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:06:54.862585 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:06:54.863297 | orchestrator |
2025-06-02 13:06:54.864067 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-02 13:06:54.864928 | orchestrator | Monday 02 June 2025 13:06:54 +0000 (0:00:01.707) 0:00:16.806 ***********
2025-06-02 13:06:55.019970 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:06:55.100992 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:06:55.176088 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:06:55.247735 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:06:55.322099 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:06:55.440102 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:06:55.440808 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:06:55.441513 | orchestrator |
2025-06-02 13:06:55.442614 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-02 13:06:55.443172 | orchestrator |
2025-06-02 13:06:55.444195 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-02 13:06:55.444950 | orchestrator | Monday 02 June 2025 13:06:55 +0000 (0:00:00.589) 0:00:17.396 ***********
2025-06-02 13:06:58.108211 | orchestrator | ok: [testbed-manager]
2025-06-02 13:06:58.108318 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:06:58.108333 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:06:58.108403 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:06:58.109181 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:06:58.109397 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:06:58.110251 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:06:58.110630 | orchestrator |
2025-06-02 13:06:58.111824 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:06:58.112284 | orchestrator | 2025-06-02 13:06:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:06:58.112401 | orchestrator | 2025-06-02 13:06:58 | INFO  | Please wait and do not abort execution.
2025-06-02 13:06:58.113751 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:06:58.114379 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.115171 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.116059 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.116687 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.117769 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.119180 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:06:58.119666 | orchestrator |
2025-06-02 13:06:58.120640 | orchestrator |
2025-06-02 13:06:58.120906 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:06:58.122090 | orchestrator | Monday 02 June 2025 13:06:58 +0000 (0:00:02.662) 0:00:20.059 ***********
2025-06-02 13:06:58.122798 | orchestrator | ===============================================================================
2025-06-02 13:06:58.123281 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.61s
2025-06-02 13:06:58.123765 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s
2025-06-02 13:06:58.124532 | orchestrator | Apply netplan configuration --------------------------------------------- 1.93s
2025-06-02 13:06:58.125305 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s
2025-06-02 13:06:58.125900 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s
2025-06-02 13:06:58.126376 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s
2025-06-02 13:06:58.126922 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s
2025-06-02 13:06:58.127354 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s
2025-06-02 13:06:58.128601 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s
2025-06-02 13:06:58.128795 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s
2025-06-02 13:06:58.129103 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s
2025-06-02 13:06:58.129304 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s
2025-06-02 13:06:58.609227 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-02 13:07:00.275701 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:00.275801 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:00.275815 | orchestrator | Registering Redlock._release_script
2025-06-02 13:07:00.333880 | orchestrator | 2025-06-02 13:07:00 | INFO  | Task 1ddd9839-f70f-411b-87ca-3305f4a3866c (reboot) was prepared for execution.
2025-06-02 13:07:00.333975 | orchestrator | 2025-06-02 13:07:00 | INFO  | It takes a moment until task 1ddd9839-f70f-411b-87ca-3305f4a3866c (reboot) has been started and output is visible here.
2025-06-02 13:07:04.241310 | orchestrator |
2025-06-02 13:07:04.241465 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:04.242335 | orchestrator |
2025-06-02 13:07:04.243481 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:04.244689 | orchestrator | Monday 02 June 2025 13:07:04 +0000 (0:00:00.180) 0:00:00.180 ***********
2025-06-02 13:07:04.318225 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:07:04.319087 | orchestrator |
2025-06-02 13:07:04.319497 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:04.322606 | orchestrator | Monday 02 June 2025 13:07:04 +0000 (0:00:00.079) 0:00:00.260 ***********
2025-06-02 13:07:05.163537 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:05.164385 | orchestrator |
2025-06-02 13:07:05.165337 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:05.165946 | orchestrator | Monday 02 June 2025 13:07:05 +0000 (0:00:00.845) 0:00:01.105 ***********
2025-06-02 13:07:05.276389 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:07:05.276695 | orchestrator |
2025-06-02 13:07:05.278349 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:05.278711 | orchestrator |
2025-06-02 13:07:05.282839 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:05.282911 | orchestrator | Monday 02 June 2025 13:07:05 +0000 (0:00:00.111) 0:00:01.216 ***********
2025-06-02 13:07:05.369288 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:07:05.369375 | orchestrator |
2025-06-02 13:07:05.369627 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:05.370285 | orchestrator | Monday 02 June 2025 13:07:05 +0000 (0:00:00.093) 0:00:01.309 ***********
2025-06-02 13:07:06.024888 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:06.025353 | orchestrator |
2025-06-02 13:07:06.026069 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:06.027100 | orchestrator | Monday 02 June 2025 13:07:06 +0000 (0:00:00.657) 0:00:01.966 ***********
2025-06-02 13:07:06.116183 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:07:06.116460 | orchestrator |
2025-06-02 13:07:06.117318 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:06.117774 | orchestrator |
2025-06-02 13:07:06.118431 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:06.119163 | orchestrator | Monday 02 June 2025 13:07:06 +0000 (0:00:00.090) 0:00:02.057 ***********
2025-06-02 13:07:06.273005 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:07:06.274649 | orchestrator |
2025-06-02 13:07:06.275705 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:06.276551 | orchestrator | Monday 02 June 2025 13:07:06 +0000 (0:00:00.157) 0:00:02.214 ***********
2025-06-02 13:07:06.917954 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:06.919215 | orchestrator |
2025-06-02 13:07:06.919755 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:06.920061 | orchestrator | Monday 02 June 2025 13:07:06 +0000 (0:00:00.645) 0:00:02.860 ***********
2025-06-02 13:07:07.011706 | orchestrator | skipping: [testbed-node-2]
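Editor's note: every console line in this log carries a Zuul executor prefix of the form "timestamp | hostname | message". A minimal sketch for splitting that prefix off when post-processing such logs; the field layout is assumed from the lines shown here, and `parse_console_line` is a hypothetical helper:

```python
from datetime import datetime

def parse_console_line(line):
    """Split a Zuul console line such as
    '2025-06-02 13:07:06.917954 | orchestrator | changed: [testbed-node-2]'
    into (timestamp, node, message).  maxsplit=2 keeps any further ' | '
    separators (e.g. nested osism INFO prefixes) inside the message."""
    ts_str, node, message = line.split(" | ", 2)
    ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S.%f")
    return ts, node.strip(), message
```

This makes it straightforward to, for example, filter the stream by inventory hostname or compute gaps between consecutive entries.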
2025-06-02 13:07:07.013335 | orchestrator |
2025-06-02 13:07:07.014492 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:07.015903 | orchestrator |
2025-06-02 13:07:07.016211 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:07.017100 | orchestrator | Monday 02 June 2025 13:07:07 +0000 (0:00:00.093) 0:00:02.953 ***********
2025-06-02 13:07:07.094950 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:07:07.096544 | orchestrator |
2025-06-02 13:07:07.097701 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:07.098311 | orchestrator | Monday 02 June 2025 13:07:07 +0000 (0:00:00.083) 0:00:03.037 ***********
2025-06-02 13:07:07.741458 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:07.741682 | orchestrator |
2025-06-02 13:07:07.742376 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:07.743282 | orchestrator | Monday 02 June 2025 13:07:07 +0000 (0:00:00.645) 0:00:03.682 ***********
2025-06-02 13:07:07.838109 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:07:07.838643 | orchestrator |
2025-06-02 13:07:07.839373 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:07.839994 | orchestrator |
2025-06-02 13:07:07.840757 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:07.841480 | orchestrator | Monday 02 June 2025 13:07:07 +0000 (0:00:00.096) 0:00:03.778 ***********
2025-06-02 13:07:07.923819 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:07:07.924270 | orchestrator |
2025-06-02 13:07:07.925009 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:07.925846 | orchestrator | Monday 02 June 2025 13:07:07 +0000 (0:00:00.087) 0:00:03.866 ***********
2025-06-02 13:07:08.519681 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:08.520122 | orchestrator |
2025-06-02 13:07:08.521489 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:08.522422 | orchestrator | Monday 02 June 2025 13:07:08 +0000 (0:00:00.595) 0:00:04.461 ***********
2025-06-02 13:07:08.615707 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:07:08.615825 | orchestrator |
2025-06-02 13:07:08.616828 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-02 13:07:08.617658 | orchestrator |
2025-06-02 13:07:08.618625 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-02 13:07:08.619460 | orchestrator | Monday 02 June 2025 13:07:08 +0000 (0:00:00.094) 0:00:04.556 ***********
2025-06-02 13:07:08.699600 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:07:08.699816 | orchestrator |
2025-06-02 13:07:08.700776 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-02 13:07:08.701626 | orchestrator | Monday 02 June 2025 13:07:08 +0000 (0:00:00.084) 0:00:04.641 ***********
2025-06-02 13:07:09.360349 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:09.360862 | orchestrator |
2025-06-02 13:07:09.361452 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-02 13:07:09.363424 | orchestrator | Monday 02 June 2025 13:07:09 +0000 (0:00:00.660) 0:00:05.301 ***********
2025-06-02 13:07:09.393315 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:07:09.394380 | orchestrator |
2025-06-02 13:07:09.395379 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:07:09.396134 | orchestrator | 2025-06-02 13:07:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:07:09.396165 | orchestrator | 2025-06-02 13:07:09 | INFO  | Please wait and do not abort execution.
2025-06-02 13:07:09.396893 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.397658 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.398645 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.399601 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.400199 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.400784 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:07:09.401063 | orchestrator |
2025-06-02 13:07:09.401436 | orchestrator |
2025-06-02 13:07:09.402065 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:07:09.402544 | orchestrator | Monday 02 June 2025 13:07:09 +0000 (0:00:00.034) 0:00:05.336 ***********
2025-06-02 13:07:09.402848 | orchestrator | ===============================================================================
2025-06-02 13:07:09.403599 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.05s
2025-06-02 13:07:09.404211 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s
2025-06-02 13:07:09.404339 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.52s
2025-06-02 13:07:09.903849 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-02 13:07:11.527036 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:11.527145 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:11.527185 | orchestrator | Registering Redlock._release_script
2025-06-02 13:07:11.593654 | orchestrator | 2025-06-02 13:07:11 | INFO  | Task 2bb4861c-afaa-4eff-a201-1c64cd863787 (wait-for-connection) was prepared for execution.
2025-06-02 13:07:11.593751 | orchestrator | 2025-06-02 13:07:11 | INFO  | It takes a moment until task 2bb4861c-afaa-4eff-a201-1c64cd863787 (wait-for-connection) has been started and output is visible here.
2025-06-02 13:07:15.561239 | orchestrator |
2025-06-02 13:07:15.561456 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-02 13:07:15.561896 | orchestrator |
2025-06-02 13:07:15.564938 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-02 13:07:15.564986 | orchestrator | Monday 02 June 2025 13:07:15 +0000 (0:00:00.231) 0:00:00.231 ***********
2025-06-02 13:07:28.250890 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:28.251020 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:28.251036 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:28.251048 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:28.252206 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:28.253992 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:28.254932 | orchestrator |
2025-06-02 13:07:28.255682 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:07:28.256268 | orchestrator | 2025-06-02 13:07:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:07:28.256341 | orchestrator | 2025-06-02 13:07:28 | INFO  | Please wait and do not abort execution.
2025-06-02 13:07:28.257348 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.258159 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.258229 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.258814 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.259645 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.259827 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:28.260166 | orchestrator |
2025-06-02 13:07:28.260632 | orchestrator |
2025-06-02 13:07:28.261179 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:07:28.261498 | orchestrator | Monday 02 June 2025 13:07:28 +0000 (0:00:12.689) 0:00:12.921 ***********
2025-06-02 13:07:28.261834 | orchestrator | ===============================================================================
2025-06-02 13:07:28.262287 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.69s
2025-06-02 13:07:28.793118 | orchestrator | + osism apply hddtemp
2025-06-02 13:07:30.436659 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:30.436765 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:30.436781 | orchestrator | Registering Redlock._release_script
2025-06-02 13:07:30.494271 | orchestrator | 2025-06-02 13:07:30 | INFO  | Task 0b1fce1e-adc9-48df-a165-05049327eff3 (hddtemp) was prepared for execution.
2025-06-02 13:07:30.494368 | orchestrator | 2025-06-02 13:07:30 | INFO  | It takes a moment until task 0b1fce1e-adc9-48df-a165-05049327eff3 (hddtemp) has been started and output is visible here.
2025-06-02 13:07:34.487119 | orchestrator |
2025-06-02 13:07:34.487387 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-02 13:07:34.489791 | orchestrator |
2025-06-02 13:07:34.489829 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-02 13:07:34.493003 | orchestrator | Monday 02 June 2025 13:07:34 +0000 (0:00:00.261) 0:00:00.261 ***********
2025-06-02 13:07:34.634398 | orchestrator | ok: [testbed-manager]
2025-06-02 13:07:34.710112 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:34.785077 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:34.859092 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:35.025239 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:35.156719 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:35.157058 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:35.158287 | orchestrator |
2025-06-02 13:07:35.161919 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-02 13:07:35.161947 | orchestrator | Monday 02 June 2025 13:07:35 +0000 (0:00:00.672) 0:00:00.933 ***********
2025-06-02 13:07:36.339188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:07:36.339941 | orchestrator |
2025-06-02 13:07:36.340781 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-02 13:07:36.342789 | orchestrator | Monday 02 June 2025 13:07:36 +0000 (0:00:01.181) 0:00:02.114 ***********
2025-06-02 13:07:38.292986 | orchestrator | ok: [testbed-manager]
2025-06-02 13:07:38.293111 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:38.293878 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:38.293906 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:38.294402 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:38.295524 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:38.295681 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:38.296174 | orchestrator |
2025-06-02 13:07:38.297845 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-02 13:07:38.300137 | orchestrator | Monday 02 June 2025 13:07:38 +0000 (0:00:01.956) 0:00:04.070 ***********
2025-06-02 13:07:38.958401 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:39.044046 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:39.489253 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:39.489491 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:39.491358 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:39.493056 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:39.493093 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:39.493721 | orchestrator |
2025-06-02 13:07:39.494749 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-02 13:07:39.496774 | orchestrator | Monday 02 June 2025 13:07:39 +0000 (0:00:01.192) 0:00:05.263 ***********
2025-06-02 13:07:40.586006 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:07:40.586190 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:07:40.586603 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:07:40.587175 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:07:40.587730 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:07:40.590987 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:07:40.591016 | orchestrator | ok: [testbed-manager]
2025-06-02 13:07:40.591029 | orchestrator |
2025-06-02 13:07:40.591042 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-02 13:07:40.591056 | orchestrator | Monday 02 June 2025 13:07:40 +0000 (0:00:01.101) 0:00:06.365 ***********
2025-06-02 13:07:41.016567 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:07:41.109967 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:07:41.194978 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:41.277503 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:07:41.391231 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:07:41.391741 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:07:41.396181 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:07:41.396231 | orchestrator |
2025-06-02 13:07:41.396245 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-02 13:07:41.396258 | orchestrator | Monday 02 June 2025 13:07:41 +0000 (0:00:00.802) 0:00:07.167 ***********
2025-06-02 13:07:53.306508 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:53.306778 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:53.306795 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:53.306806 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:53.306816 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:53.306826 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:53.306942 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:53.308637 | orchestrator |
2025-06-02 13:07:53.309002 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-02 13:07:53.309676 | orchestrator | Monday 02 June 2025 13:07:53 +0000 (0:00:11.913) 0:00:19.080 ***********
2025-06-02 13:07:54.670301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:07:54.670647 | orchestrator |
2025-06-02 13:07:54.673801 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-02 13:07:54.673840 | orchestrator | Monday 02 June 2025 13:07:54 +0000 (0:00:01.365) 0:00:20.446 ***********
2025-06-02 13:07:56.340449 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:07:56.341741 | orchestrator | changed: [testbed-manager]
2025-06-02 13:07:56.342760 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:07:56.343483 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:07:56.344747 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:07:56.346091 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:07:56.347050 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:07:56.347639 | orchestrator |
2025-06-02 13:07:56.349650 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:07:56.349695 | orchestrator | 2025-06-02 13:07:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:07:56.349710 | orchestrator | 2025-06-02 13:07:56 | INFO  | Please wait and do not abort execution.
2025-06-02 13:07:56.350200 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:07:56.350911 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.351351 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.352159 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.352470 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.353064 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.353571 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:07:56.354346 | orchestrator |
2025-06-02 13:07:56.354676 | orchestrator |
2025-06-02 13:07:56.355136 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:07:56.355641 | orchestrator | Monday 02 June 2025 13:07:56 +0000 (0:00:01.674) 0:00:22.120 ***********
2025-06-02 13:07:56.356053 | orchestrator | ===============================================================================
2025-06-02 13:07:56.356629 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.91s
2025-06-02 13:07:56.357190 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.96s
2025-06-02 13:07:56.358134 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.67s
2025-06-02 13:07:56.358681 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s
2025-06-02 13:07:56.359305 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2025-06-02 13:07:56.359913 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s
2025-06-02 13:07:56.360630 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s
2025-06-02 13:07:56.361330 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.80s
2025-06-02 13:07:56.361869 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.67s
2025-06-02 13:07:56.723098 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-06-02 13:07:58.162621 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 13:07:58.162706 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 13:07:58.162722 | orchestrator | + local max_attempts=60
2025-06-02 13:07:58.162735 | orchestrator | + local name=ceph-ansible
2025-06-02 13:07:58.162746 | orchestrator | + local attempt_num=1
2025-06-02 13:07:58.163005 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 13:07:58.199826 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:58.199885 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 13:07:58.199897 | orchestrator | + local max_attempts=60
2025-06-02 13:07:58.199909 | orchestrator | + local name=kolla-ansible
2025-06-02 13:07:58.199920 | orchestrator | + local attempt_num=1
2025-06-02 13:07:58.200568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 13:07:58.230703 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:58.230751 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 13:07:58.230764 | orchestrator | + local max_attempts=60
2025-06-02 13:07:58.230776 | orchestrator | + local name=osism-ansible
2025-06-02 13:07:58.230788 | orchestrator | + local attempt_num=1
2025-06-02 13:07:58.231263 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 13:07:58.259198 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 13:07:58.259254 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 13:07:58.259269 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 13:07:58.382196 | orchestrator | ARA in ceph-ansible already disabled.
2025-06-02 13:07:58.498861 | orchestrator | ARA in kolla-ansible already disabled.
2025-06-02 13:07:58.632736 | orchestrator | ARA in osism-ansible already disabled.
2025-06-02 13:07:58.777458 | orchestrator | ARA in osism-kubernetes already disabled.
2025-06-02 13:07:58.777850 | orchestrator | + osism apply gather-facts
2025-06-02 13:07:59.988294 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:07:59.988380 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:07:59.988395 | orchestrator | Registering Redlock._release_script
2025-06-02 13:08:00.031166 | orchestrator | 2025-06-02 13:08:00 | INFO  | Task 6472a33c-d604-48c4-b753-a39c2ff142ea (gather-facts) was prepared for execution.
2025-06-02 13:08:00.031228 | orchestrator | 2025-06-02 13:08:00 | INFO  | It takes a moment until task 6472a33c-d604-48c4-b753-a39c2ff142ea (gather-facts) has been started and output is visible here.
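The `set -x` trace above only shows the expanded commands of `wait_for_container_healthy`; a minimal sketch of what such a helper plausibly looks like, reconstructed from the trace (the function name and `max_attempts`/`name`/`attempt_num` locals match the trace, the loop body and `sleep` interval are assumptions; the trace also invokes `docker` via its absolute path `/usr/bin/docker`):

```shell
#!/usr/bin/env bash
# Hedged reconstruction, not the actual testbed script: poll a container's
# health status until it reports "healthy" or max_attempts is exhausted.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Query the Docker health status (the trace uses /usr/bin/docker).
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed polling interval, not visible in the trace
    done
}
```

In the log all three containers (ceph-ansible, kolla-ansible, osism-ansible) are already healthy, so each call returns after a single `docker inspect`.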
2025-06-02 13:08:02.935449 | orchestrator |
2025-06-02 13:08:02.936111 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 13:08:02.936955 | orchestrator |
2025-06-02 13:08:02.938695 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 13:08:02.940930 | orchestrator | Monday 02 June 2025 13:08:02 +0000 (0:00:00.162) 0:00:00.162 ***********
2025-06-02 13:08:07.737648 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:08:07.738604 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:08:07.739046 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:08:07.740247 | orchestrator | ok: [testbed-manager]
2025-06-02 13:08:07.742426 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:08:07.742458 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:08:07.742471 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:08:07.743039 | orchestrator |
2025-06-02 13:08:07.743613 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 13:08:07.744245 | orchestrator |
2025-06-02 13:08:07.744929 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 13:08:07.745591 | orchestrator | Monday 02 June 2025 13:08:07 +0000 (0:00:04.803) 0:00:04.965 ***********
2025-06-02 13:08:07.891815 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:08:07.975659 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:08:08.054776 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:08:08.131168 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:08:08.202206 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:08:08.231611 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:08:08.232600 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:08:08.233751 | orchestrator |
2025-06-02 13:08:08.234788 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:08:08.235064 | orchestrator | 2025-06-02 13:08:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:08:08.235647 | orchestrator | 2025-06-02 13:08:08 | INFO  | Please wait and do not abort execution.
2025-06-02 13:08:08.237043 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.237698 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.238453 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.238980 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.239633 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.240124 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.240619 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 13:08:08.241086 | orchestrator |
2025-06-02 13:08:08.241613 | orchestrator |
2025-06-02 13:08:08.242168 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:08:08.242630 | orchestrator | Monday 02 June 2025 13:08:08 +0000 (0:00:00.495) 0:00:05.461 ***********
2025-06-02 13:08:08.243212 | orchestrator | ===============================================================================
2025-06-02 13:08:08.243615 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s
2025-06-02 13:08:08.244268 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-06-02 13:08:08.807068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-06-02 13:08:08.819398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-06-02 13:08:08.837078 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-06-02 13:08:08.846709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-06-02 13:08:08.865632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-06-02 13:08:08.890127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-06-02 13:08:08.907639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-06-02 13:08:08.929389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-06-02 13:08:08.947717 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-06-02 13:08:08.967131 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-06-02 13:08:08.987415 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-06-02 13:08:09.007209 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-06-02 13:08:09.024235 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-06-02 13:08:09.037644 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-06-02 13:08:09.050844 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-06-02 13:08:09.068480 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-06-02 13:08:09.081758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-06-02 13:08:09.098862 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-06-02 13:08:09.113036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-06-02 13:08:09.129386 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-06-02 13:08:09.145070 | orchestrator | + [[ false == \t\r\u\e ]]
2025-06-02 13:08:09.584181 | orchestrator | ok: Runtime: 0:18:06.649765
2025-06-02 13:08:09.693163 |
2025-06-02 13:08:09.693313 | TASK [Deploy services]
2025-06-02 13:08:10.224492 | orchestrator | skipping: Conditional result was False
2025-06-02 13:08:10.243332 |
2025-06-02 13:08:10.243537 | TASK [Deploy in a nutshell]
2025-06-02 13:08:10.929510 | orchestrator | + set -e
2025-06-02 13:08:10.929781 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 13:08:10.929805 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 13:08:10.929824 | orchestrator | ++ INTERACTIVE=false
2025-06-02 13:08:10.929836 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 13:08:10.929847 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 13:08:10.929860 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 13:08:10.929901 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 13:08:10.929926 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 13:08:10.929939 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 13:08:10.929952 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 13:08:10.929963 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 13:08:10.929979 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 13:08:10.929988 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 13:08:10.930006 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 13:08:10.930063 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 13:08:10.930080 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 13:08:10.930091 | orchestrator | ++ export ARA=false
2025-06-02 13:08:10.930102 | orchestrator | ++ ARA=false
2025-06-02 13:08:10.930113 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 13:08:10.930125 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 13:08:10.930135 | orchestrator | ++ export TEMPEST=false
2025-06-02 13:08:10.930146 | orchestrator | ++ TEMPEST=false
2025-06-02 13:08:10.930157 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 13:08:10.930168 | orchestrator | ++ IS_ZUUL=true
2025-06-02 13:08:10.930195 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 13:08:10.930206 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217
2025-06-02 13:08:10.930217 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 13:08:10.930228 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 13:08:10.930238 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 13:08:10.930249 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 13:08:10.930260 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 13:08:10.930270 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 13:08:10.930282 | orchestrator |
2025-06-02 13:08:10.930293 | orchestrator | # PULL IMAGES
2025-06-02 13:08:10.930304 | orchestrator |
2025-06-02 13:08:10.930315 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 13:08:10.930333 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 13:08:10.930344 | orchestrator | + echo
2025-06-02 13:08:10.930355 | orchestrator | + echo '# PULL IMAGES'
2025-06-02 13:08:10.930366 | orchestrator | + echo
2025-06-02 13:08:10.931156 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-02 13:08:10.991193 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-02 13:08:10.991300 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-06-02 13:08:12.631149 | orchestrator | 2025-06-02 13:08:12 | INFO  | Trying to run play pull-images in environment custom
2025-06-02 13:08:12.635912 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:08:12.635944 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:08:12.635956 | orchestrator | Registering Redlock._release_script
2025-06-02 13:08:12.694213 | orchestrator | 2025-06-02 13:08:12 | INFO  | Task 6a6ff191-2645-44bd-ac47-7108bf180d19 (pull-images) was prepared for execution.
2025-06-02 13:08:12.694313 | orchestrator | 2025-06-02 13:08:12 | INFO  | It takes a moment until task 6a6ff191-2645-44bd-ac47-7108bf180d19 (pull-images) has been started and output is visible here.
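The trace above shows `semver 9.1.0 7.0.0` evaluating to `1`, which the script then tests with `[[ 1 -ge 0 ]]` to gate the version-dependent `pull-images` call. A hypothetical sketch of such a comparison helper (the name `semver` comes from the trace; the implementation via GNU `sort -V` is an assumption, not the testbed's actual code):

```shell
#!/usr/bin/env bash
# Hypothetical semver helper, inferred from the trace: prints 1 if the first
# version is newer than the second, 0 if equal, -1 if older.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        # $2 sorts first, so $1 is the newer version
        echo 1
    else
        echo -1
    fi
}
```

With MANAGER_VERSION=9.1.0 this yields `1`, so the `-ge 0` guard passes and the pull-images play runs.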
2025-06-02 13:08:16.371061 | orchestrator |
2025-06-02 13:08:16.371163 | orchestrator | PLAY [Pull images] *************************************************************
2025-06-02 13:08:16.371204 | orchestrator |
2025-06-02 13:08:16.371772 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-06-02 13:08:16.373119 | orchestrator | Monday 02 June 2025 13:08:16 +0000 (0:00:00.108) 0:00:00.108 ***********
2025-06-02 13:09:21.175935 | orchestrator | changed: [testbed-manager]
2025-06-02 13:09:21.176101 | orchestrator |
2025-06-02 13:09:21.176135 | orchestrator | TASK [Pull other images] *******************************************************
2025-06-02 13:09:21.176157 | orchestrator | Monday 02 June 2025 13:09:21 +0000 (0:01:04.803) 0:01:04.911 ***********
2025-06-02 13:10:10.250453 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-06-02 13:10:10.250580 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-06-02 13:10:10.250598 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-06-02 13:10:10.250623 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-06-02 13:10:10.250660 | orchestrator | changed: [testbed-manager] => (item=common)
2025-06-02 13:10:10.250730 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-06-02 13:10:10.251744 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-06-02 13:10:10.252780 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-06-02 13:10:10.253292 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-06-02 13:10:10.254281 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-06-02 13:10:10.254989 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-06-02 13:10:10.255640 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-06-02 13:10:10.256413 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-06-02 13:10:10.257171 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-06-02 13:10:10.258013 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-06-02 13:10:10.258709 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-06-02 13:10:10.261804 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-06-02 13:10:10.262717 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-06-02 13:10:10.264643 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-06-02 13:10:10.267997 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-06-02 13:10:10.268022 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-06-02 13:10:10.268609 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-06-02 13:10:10.271214 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-06-02 13:10:10.271242 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-06-02 13:10:10.271471 | orchestrator |
2025-06-02 13:10:10.272366 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:10:10.272627 | orchestrator | 2025-06-02 13:10:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:10:10.272781 | orchestrator | 2025-06-02 13:10:10 | INFO  | Please wait and do not abort execution.
2025-06-02 13:10:10.273717 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:10:10.274135 | orchestrator |
2025-06-02 13:10:10.274600 | orchestrator |
2025-06-02 13:10:10.275026 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:10:10.275407 | orchestrator | Monday 02 June 2025 13:10:10 +0000 (0:00:49.076) 0:01:53.988 ***********
2025-06-02 13:10:10.275814 | orchestrator | ===============================================================================
2025-06-02 13:10:10.276314 | orchestrator | Pull keystone image ---------------------------------------------------- 64.80s
2025-06-02 13:10:10.276670 | orchestrator | Pull other images ------------------------------------------------------ 49.08s
2025-06-02 13:10:12.185273 | orchestrator | 2025-06-02 13:10:12 | INFO  | Trying to run play wipe-partitions in environment custom
2025-06-02 13:10:12.190398 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:10:12.190447 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:10:12.190517 | orchestrator | Registering Redlock._release_script
2025-06-02 13:10:12.249378 | orchestrator | 2025-06-02 13:10:12 | INFO  | Task bf12e68b-744d-49d8-bbf6-7d5dfe50d485 (wipe-partitions) was prepared for execution.
2025-06-02 13:10:12.249496 | orchestrator | 2025-06-02 13:10:12 | INFO  | It takes a moment until task bf12e68b-744d-49d8-bbf6-7d5dfe50d485 (wipe-partitions) has been started and output is visible here.
2025-06-02 13:10:15.891658 | orchestrator |
2025-06-02 13:10:15.891968 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-06-02 13:10:15.892496 | orchestrator |
2025-06-02 13:10:15.893236 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-06-02 13:10:15.893884 | orchestrator | Monday 02 June 2025 13:10:15 +0000 (0:00:00.103) 0:00:00.103 ***********
2025-06-02 13:10:16.418290 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:10:16.418550 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:10:16.418649 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:10:16.422076 | orchestrator |
2025-06-02 13:10:16.422386 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-06-02 13:10:16.423933 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.527) 0:00:00.630 ***********
2025-06-02 13:10:16.550611 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:16.641189 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:16.644352 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:16.644572 | orchestrator |
2025-06-02 13:10:16.646135 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-06-02 13:10:16.646192 | orchestrator | Monday 02 June 2025 13:10:16 +0000 (0:00:00.222) 0:00:00.853 ***********
2025-06-02 13:10:17.261922 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:17.262121 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:10:17.262520 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:10:17.263555 | orchestrator |
2025-06-02 13:10:17.266736 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-06-02 13:10:17.267702 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.621) 0:00:01.474 ***********
2025-06-02 13:10:17.401909 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:17.490444 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:17.490790 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:17.491121 | orchestrator |
2025-06-02 13:10:17.491580 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-02 13:10:17.494801 | orchestrator | Monday 02 June 2025 13:10:17 +0000 (0:00:00.228) 0:00:01.703 ***********
2025-06-02 13:10:18.658773 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:10:18.658863 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:10:18.658877 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:10:18.659054 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:10:18.659483 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:10:18.660039 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:10:18.660088 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:10:18.660392 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:10:18.660793 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:10:18.661067 | orchestrator |
2025-06-02 13:10:18.661491 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-02 13:10:18.661685 | orchestrator | Monday 02 June 2025 13:10:18 +0000 (0:00:01.163) 0:00:02.867 ***********
2025-06-02 13:10:19.999289 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:10:19.999371 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:10:20.000012 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:10:20.000907 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:10:20.001893 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:10:20.002562 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:10:20.002916 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:10:20.003561 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:10:20.003874 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:10:20.004221 | orchestrator |
2025-06-02 13:10:20.005011 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-02 13:10:20.005411 | orchestrator | Monday 02 June 2025 13:10:19 +0000 (0:00:01.339) 0:00:04.207 ***********
2025-06-02 13:10:22.145369 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-02 13:10:22.145882 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-02 13:10:22.146553 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-02 13:10:22.147006 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-02 13:10:22.151287 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-02 13:10:22.151336 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-02 13:10:22.151348 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-02 13:10:22.151359 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-02 13:10:22.151370 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-02 13:10:22.151381 | orchestrator |
2025-06-02 13:10:22.151393 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-02 13:10:22.151404 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:02.150) 0:00:06.358 ***********
2025-06-02 13:10:22.686994 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:10:22.687072 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:10:22.687084 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:10:22.687096 | orchestrator |
2025-06-02 13:10:22.687191 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-02 13:10:22.687251 | orchestrator | Monday 02 June 2025 13:10:22 +0000 (0:00:00.540) 0:00:06.898 ***********
2025-06-02 13:10:23.289781 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:10:23.290532 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:10:23.292025 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:10:23.292865 | orchestrator |
2025-06-02 13:10:23.294318 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:10:23.295277 | orchestrator | 2025-06-02 13:10:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:10:23.295567 | orchestrator | 2025-06-02 13:10:23 | INFO  | Please wait and do not abort execution.
2025-06-02 13:10:23.296810 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:23.298198 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:23.298887 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:23.299470 | orchestrator |
2025-06-02 13:10:23.300030 | orchestrator |
2025-06-02 13:10:23.300679 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:10:23.301057 | orchestrator | Monday 02 June 2025 13:10:23 +0000 (0:00:00.599) 0:00:07.498 ***********
2025-06-02 13:10:23.301511 | orchestrator | ===============================================================================
2025-06-02 13:10:23.302008 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s
2025-06-02 13:10:23.302433 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s
2025-06-02 13:10:23.302896 | orchestrator | Check device availability ----------------------------------------------- 1.16s
2025-06-02 13:10:23.303412 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2025-06-02 13:10:23.303998 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s
2025-06-02 13:10:23.304423 | orchestrator | Reload udev rules ------------------------------------------------------- 0.54s
2025-06-02 13:10:23.306958 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.53s
2025-06-02 13:10:23.307118 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2025-06-02 13:10:23.307517 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2025-06-02 13:10:25.334788 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:10:25.334872 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:10:25.334886 | orchestrator | Registering Redlock._release_script
2025-06-02 13:10:25.395353 | orchestrator | 2025-06-02 13:10:25 | INFO  | Task 3aad9db1-82d6-4b07-b662-b79f4a19e788 (facts) was prepared for execution.
2025-06-02 13:10:25.395443 | orchestrator | 2025-06-02 13:10:25 | INFO  | It takes a moment until task 3aad9db1-82d6-4b07-b662-b79f4a19e788 (facts) has been started and output is visible here.
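The wipe-partitions play recapped above boils down to a short command sequence per node: wipe filesystem signatures, zero the first 32 MiB, then re-trigger udev. A minimal dry-run sketch, assuming the device list from the log output; `DRY_RUN` and the `run` helper are illustrative additions, not part of the play itself:

```python
# Dry-run sketch of the wipe-partitions play; the real play runs these
# commands destructively on each node via Ansible. Device list taken from
# the log above; DRY_RUN/run() are assumptions for safe illustration.
DRY_RUN = True
DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
commands = []

def run(*cmd):
    commands.append(" ".join(cmd))  # record what would be executed
    if not DRY_RUN:
        import subprocess
        subprocess.run(cmd, check=True)

for dev in DEVICES:
    run("wipefs", "--all", dev)                                  # TASK [Wipe partitions with wipefs]
    run("dd", "if=/dev/zero", f"of={dev}", "bs=1M", "count=32")  # TASK [Overwrite first 32M with zeros]
run("udevadm", "control", "--reload-rules")                      # TASK [Reload udev rules]
run("udevadm", "trigger")                                        # TASK [Request device events from the kernel]

for c in commands:
    print(c)
```

With `DRY_RUN = True` this only prints the eight commands the play would issue; flipping it to `False` would destroy data on the listed devices.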
2025-06-02 13:10:29.183033 | orchestrator |
2025-06-02 13:10:29.183132 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 13:10:29.183847 | orchestrator |
2025-06-02 13:10:29.184769 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 13:10:29.186230 | orchestrator | Monday 02 June 2025 13:10:29 +0000 (0:00:00.245) 0:00:00.245 ***********
2025-06-02 13:10:30.379434 | orchestrator | ok: [testbed-manager]
2025-06-02 13:10:30.379551 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:10:30.380313 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:10:30.380859 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:10:30.381649 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:30.382328 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:10:30.382351 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:10:30.383661 | orchestrator |
2025-06-02 13:10:30.385343 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 13:10:30.385955 | orchestrator | Monday 02 June 2025 13:10:30 +0000 (0:00:01.197) 0:00:01.443 ***********
2025-06-02 13:10:30.535921 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:10:30.616674 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:10:30.694634 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:10:30.770110 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:10:30.843156 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:31.553680 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:31.555842 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:31.558124 | orchestrator |
2025-06-02 13:10:31.559715 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 13:10:31.561108 | orchestrator |
2025-06-02 13:10:31.562366 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 13:10:31.562885 | orchestrator | Monday 02 June 2025 13:10:31 +0000 (0:00:01.182) 0:00:02.625 ***********
2025-06-02 13:10:36.110561 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:10:36.110660 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:10:36.110671 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:10:36.110681 | orchestrator | ok: [testbed-manager]
2025-06-02 13:10:36.110690 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:36.110699 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:10:36.110707 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:10:36.110716 | orchestrator |
2025-06-02 13:10:36.110727 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 13:10:36.110736 | orchestrator |
2025-06-02 13:10:36.110746 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 13:10:36.113711 | orchestrator | Monday 02 June 2025 13:10:36 +0000 (0:00:04.549) 0:00:07.174 ***********
2025-06-02 13:10:36.462322 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:10:36.541558 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:10:36.617166 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:10:36.703996 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:10:36.780326 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:36.829297 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:10:36.829551 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:10:36.829775 | orchestrator |
2025-06-02 13:10:36.830602 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:10:36.830640 | orchestrator | 2025-06-02 13:10:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:10:36.830656 | orchestrator | 2025-06-02 13:10:36 | INFO  | Please wait and do not abort execution.
2025-06-02 13:10:36.830902 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.831151 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.831521 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.831775 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.831979 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.832240 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.832599 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:10:36.833589 | orchestrator |
2025-06-02 13:10:36.834422 | orchestrator |
2025-06-02 13:10:36.834690 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:10:36.835124 | orchestrator | Monday 02 June 2025 13:10:36 +0000 (0:00:00.725) 0:00:07.900 ***********
2025-06-02 13:10:36.836861 | orchestrator | ===============================================================================
2025-06-02 13:10:36.838840 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.55s
2025-06-02 13:10:36.838950 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s
2025-06-02 13:10:36.838971 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s
2025-06-02 13:10:36.838983 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.73s
2025-06-02 13:10:39.254264 | orchestrator | 2025-06-02 13:10:39 | INFO  | Task 08998d72-1e55-452f-91a6-be477d5ba478 (ceph-configure-lvm-volumes) was prepared for execution.
2025-06-02 13:10:39.254384 | orchestrator | 2025-06-02 13:10:39 | INFO  | It takes a moment until task 08998d72-1e55-452f-91a6-be477d5ba478 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-06-02 13:10:43.574999 | orchestrator |
2025-06-02 13:10:43.575327 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-02 13:10:43.579582 | orchestrator |
2025-06-02 13:10:43.581595 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 13:10:43.581689 | orchestrator | Monday 02 June 2025 13:10:43 +0000 (0:00:00.312) 0:00:00.312 ***********
2025-06-02 13:10:43.813712 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 13:10:43.813813 | orchestrator |
2025-06-02 13:10:43.813830 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 13:10:43.813842 | orchestrator | Monday 02 June 2025 13:10:43 +0000 (0:00:00.239) 0:00:00.551 ***********
2025-06-02 13:10:44.042839 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:44.044618 | orchestrator |
2025-06-02 13:10:44.045427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:44.046120 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.226) 0:00:00.778 ***********
2025-06-02 13:10:44.428983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-02 13:10:44.429808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-02 13:10:44.429850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-02 13:10:44.433368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-02 13:10:44.436608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-02 13:10:44.437428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-02 13:10:44.439977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-02 13:10:44.440402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-02 13:10:44.441041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-02 13:10:44.441493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-02 13:10:44.441848 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-02 13:10:44.442656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-02 13:10:44.442796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-02 13:10:44.443117 | orchestrator |
2025-06-02 13:10:44.443860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:44.447030 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.390) 0:00:01.169 ***********
2025-06-02 13:10:44.840978 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:44.842725 | orchestrator |
2025-06-02 13:10:44.843223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:44.843855 | orchestrator | Monday 02 June 2025 13:10:44 +0000 (0:00:00.413) 0:00:01.582 ***********
2025-06-02 13:10:45.019060 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.021125 | orchestrator |
2025-06-02 13:10:45.021707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.022177 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.175) 0:00:01.757 ***********
2025-06-02 13:10:45.191640 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.191780 | orchestrator |
2025-06-02 13:10:45.192306 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.192598 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.174) 0:00:01.932 ***********
2025-06-02 13:10:45.362321 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.363989 | orchestrator |
2025-06-02 13:10:45.367753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.368512 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.170) 0:00:02.102 ***********
2025-06-02 13:10:45.539829 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.540220 | orchestrator |
2025-06-02 13:10:45.540760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.541622 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.177) 0:00:02.280 ***********
2025-06-02 13:10:45.704362 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.704592 | orchestrator |
2025-06-02 13:10:45.705219 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.705648 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.162) 0:00:02.442 ***********
2025-06-02 13:10:45.895240 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:45.896657 | orchestrator |
2025-06-02 13:10:45.899707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:45.899748 | orchestrator | Monday 02 June 2025 13:10:45 +0000 (0:00:00.193) 0:00:02.636 ***********
2025-06-02 13:10:46.066174 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:46.066653 | orchestrator |
2025-06-02 13:10:46.067411 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:46.067868 | orchestrator | Monday 02 June 2025 13:10:46 +0000 (0:00:00.170) 0:00:02.806 ***********
2025-06-02 13:10:46.455928 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb)
2025-06-02 13:10:46.456335 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb)
2025-06-02 13:10:46.457606 | orchestrator |
2025-06-02 13:10:46.458303 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:46.459525 | orchestrator | Monday 02 June 2025 13:10:46 +0000 (0:00:00.390) 0:00:03.196 ***********
2025-06-02 13:10:46.827899 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff)
2025-06-02 13:10:46.828495 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff)
2025-06-02 13:10:46.829148 | orchestrator |
2025-06-02 13:10:46.832952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:46.833488 | orchestrator | Monday 02 June 2025 13:10:46 +0000 (0:00:00.372) 0:00:03.568 ***********
2025-06-02 13:10:47.360830 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5)
2025-06-02 13:10:47.361371 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5)
2025-06-02 13:10:47.362124 | orchestrator |
2025-06-02 13:10:47.363023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:47.365186 | orchestrator | Monday 02 June 2025 13:10:47 +0000 (0:00:00.529) 0:00:04.097 ***********
2025-06-02 13:10:47.869872 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4)
2025-06-02 13:10:47.870564 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4)
2025-06-02 13:10:47.871650 | orchestrator |
2025-06-02 13:10:47.872642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:10:47.873072 | orchestrator | Monday 02 June 2025 13:10:47 +0000 (0:00:00.512) 0:00:04.610 ***********
2025-06-02 13:10:48.441388 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 13:10:48.442110 | orchestrator |
2025-06-02 13:10:48.442357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:48.443965 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.572) 0:00:05.183 ***********
2025-06-02 13:10:48.767030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-02 13:10:48.768630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-02 13:10:48.768703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-02 13:10:48.768717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-02 13:10:48.772289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-02 13:10:48.772318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-02 13:10:48.772962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-02 13:10:48.773702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-02 13:10:48.774304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-02 13:10:48.775157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-02 13:10:48.775607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-02 13:10:48.775992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-02 13:10:48.776482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-02 13:10:48.777105 | orchestrator |
2025-06-02 13:10:48.777128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:48.779013 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.324) 0:00:05.507 ***********
2025-06-02 13:10:48.953144 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:48.953319 | orchestrator |
2025-06-02 13:10:48.955227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:48.957163 | orchestrator | Monday 02 June 2025 13:10:48 +0000 (0:00:00.184) 0:00:05.691 ***********
2025-06-02 13:10:49.141097 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:49.143673 | orchestrator |
2025-06-02 13:10:49.143728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:49.144061 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.186) 0:00:05.878 ***********
2025-06-02 13:10:49.321707 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:49.321913 | orchestrator |
2025-06-02 13:10:49.322426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:49.322881 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.181) 0:00:06.059 ***********
2025-06-02 13:10:49.525416 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:49.525593 | orchestrator |
2025-06-02 13:10:49.526509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:49.526546 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.206) 0:00:06.266 ***********
2025-06-02 13:10:49.704082 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:49.706603 | orchestrator |
2025-06-02 13:10:49.710319 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:49.712539 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.176) 0:00:06.442 ***********
2025-06-02 13:10:49.904261 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:49.905682 | orchestrator |
2025-06-02 13:10:49.906956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:49.907321 | orchestrator | Monday 02 June 2025 13:10:49 +0000 (0:00:00.198) 0:00:06.640 ***********
2025-06-02 13:10:50.079224 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:50.080051 | orchestrator |
2025-06-02 13:10:50.080785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:50.081338 | orchestrator | Monday 02 June 2025 13:10:50 +0000 (0:00:00.175) 0:00:06.816 ***********
2025-06-02 13:10:50.267967 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:50.270369 | orchestrator |
2025-06-02 13:10:50.270400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:50.270825 | orchestrator | Monday 02 June 2025 13:10:50 +0000 (0:00:00.190) 0:00:07.007 ***********
2025-06-02 13:10:51.052356 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-02 13:10:51.055432 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-02 13:10:51.055880 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-02 13:10:51.056320 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-02 13:10:51.056856 | orchestrator |
2025-06-02 13:10:51.057344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:51.057887 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.783) 0:00:07.790 ***********
2025-06-02 13:10:51.257366 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:51.257486 | orchestrator |
2025-06-02 13:10:51.257513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:51.257913 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.205) 0:00:07.996 ***********
2025-06-02 13:10:51.455299 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:51.457658 | orchestrator |
2025-06-02 13:10:51.457714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:51.457771 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.200) 0:00:08.196 ***********
2025-06-02 13:10:51.659156 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:51.660824 | orchestrator |
2025-06-02 13:10:51.660855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:10:51.660867 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.202) 0:00:08.398 ***********
2025-06-02 13:10:51.859808 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:51.860295 | orchestrator |
2025-06-02 13:10:51.861081 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 13:10:51.863738 | orchestrator | Monday 02 June 2025 13:10:51 +0000 (0:00:00.202) 0:00:08.601 ***********
2025-06-02 13:10:52.034202 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-02 13:10:52.034469 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-02 13:10:52.036604 | orchestrator |
2025-06-02 13:10:52.037935 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 13:10:52.038756 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.171) 0:00:08.772 ***********
2025-06-02 13:10:52.156902 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:52.158348 | orchestrator |
2025-06-02 13:10:52.159147 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 13:10:52.159631 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.123) 0:00:08.896 ***********
2025-06-02 13:10:52.281800 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:52.282559 | orchestrator |
2025-06-02 13:10:52.283020 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 13:10:52.283298 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.126) 0:00:09.023 ***********
2025-06-02 13:10:52.412821 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:52.414129 | orchestrator |
2025-06-02 13:10:52.416786 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 13:10:52.417681 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.130) 0:00:09.153 ***********
2025-06-02 13:10:52.538639 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:52.538696 | orchestrator |
2025-06-02 13:10:52.539853 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 13:10:52.540328 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.125) 0:00:09.279 ***********
2025-06-02 13:10:52.693497 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}})
2025-06-02 13:10:52.694524 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}})
2025-06-02 13:10:52.695314 | orchestrator |
2025-06-02 13:10:52.697780 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 13:10:52.698203 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.154) 0:00:09.434 ***********
2025-06-02 13:10:52.832307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}})
2025-06-02 13:10:52.832846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}})
2025-06-02 13:10:52.832987 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:52.834370 | orchestrator |
2025-06-02 13:10:52.835732 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 13:10:52.835762 | orchestrator | Monday 02 June 2025 13:10:52 +0000 (0:00:00.137) 0:00:09.572 ***********
2025-06-02 13:10:53.128310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}})
2025-06-02 13:10:53.128702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}})
2025-06-02 13:10:53.130467 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:53.130510 | orchestrator |
2025-06-02 13:10:53.130932 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 13:10:53.130959 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.295) 0:00:09.867 ***********
2025-06-02 13:10:53.257074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}})
2025-06-02 13:10:53.258308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}})
2025-06-02 13:10:53.258973 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:53.261200 | orchestrator |
2025-06-02 13:10:53.261949 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 13:10:53.262106 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.130) 0:00:09.998 ***********
2025-06-02 13:10:53.396906 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:53.397338 | orchestrator |
2025-06-02 13:10:53.397741 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 13:10:53.398353 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.140) 0:00:10.138 ***********
2025-06-02 13:10:53.536622 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:10:53.536665 | orchestrator |
2025-06-02 13:10:53.536844 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 13:10:53.537713 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.138) 0:00:10.276 ***********
2025-06-02 13:10:53.648960 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:53.651563 | orchestrator |
2025-06-02 13:10:53.651603 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 13:10:53.651633 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.112) 0:00:10.389 ***********
2025-06-02 13:10:53.772167 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:53.772252 | orchestrator |
2025-06-02 13:10:53.772281 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 13:10:53.772303 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.122) 0:00:10.512 ***********
2025-06-02 13:10:53.886166 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:53.889546 | orchestrator |
2025-06-02 13:10:53.889859 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 13:10:53.891757 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.109) 0:00:10.622 ***********
2025-06-02 13:10:53.996021 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 13:10:53.996821 | orchestrator |  "ceph_osd_devices": {
2025-06-02 13:10:53.998748 | orchestrator |  "sdb": {
2025-06-02 13:10:53.999473 | orchestrator |  "osd_lvm_uuid": "16065c32-ca37-5a4d-8ac9-40bfcb225d4e"
2025-06-02 13:10:54.000936 | orchestrator |  },
2025-06-02 13:10:54.003025 | orchestrator |  "sdc": {
2025-06-02 13:10:54.003501 | orchestrator |  "osd_lvm_uuid": "8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb"
2025-06-02 13:10:54.004133 | orchestrator |  }
2025-06-02 13:10:54.006808 | orchestrator |  }
2025-06-02 13:10:54.007208 | orchestrator | }
2025-06-02 13:10:54.007529 | orchestrator |
2025-06-02 13:10:54.007880 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 13:10:54.008308 | orchestrator | Monday 02 June 2025 13:10:53 +0000 (0:00:00.113) 0:00:10.736 ***********
2025-06-02 13:10:54.122396 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:54.123992 | orchestrator |
2025-06-02 13:10:54.124606 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 13:10:54.125046 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.124) 0:00:10.860 ***********
2025-06-02 13:10:54.241999 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:54.242126 | orchestrator |
2025-06-02 13:10:54.242141 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 13:10:54.242158 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.120) 0:00:10.981 ***********
2025-06-02 13:10:54.356873 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:10:54.358978 | orchestrator |
2025-06-02
13:10:54.361734 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 13:10:54.364532 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.114) 0:00:11.096 *********** 2025-06-02 13:10:54.533577 | orchestrator | changed: [testbed-node-3] => { 2025-06-02 13:10:54.535171 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 13:10:54.540357 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:10:54.540400 | orchestrator |  "sdb": { 2025-06-02 13:10:54.540414 | orchestrator |  "osd_lvm_uuid": "16065c32-ca37-5a4d-8ac9-40bfcb225d4e" 2025-06-02 13:10:54.540616 | orchestrator |  }, 2025-06-02 13:10:54.541537 | orchestrator |  "sdc": { 2025-06-02 13:10:54.542088 | orchestrator |  "osd_lvm_uuid": "8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb" 2025-06-02 13:10:54.542553 | orchestrator |  } 2025-06-02 13:10:54.543144 | orchestrator |  }, 2025-06-02 13:10:54.543714 | orchestrator |  "lvm_volumes": [ 2025-06-02 13:10:54.544242 | orchestrator |  { 2025-06-02 13:10:54.544691 | orchestrator |  "data": "osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e", 2025-06-02 13:10:54.545192 | orchestrator |  "data_vg": "ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e" 2025-06-02 13:10:54.546579 | orchestrator |  }, 2025-06-02 13:10:54.549536 | orchestrator |  { 2025-06-02 13:10:54.549552 | orchestrator |  "data": "osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb", 2025-06-02 13:10:54.549558 | orchestrator |  "data_vg": "ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb" 2025-06-02 13:10:54.549563 | orchestrator |  } 2025-06-02 13:10:54.549569 | orchestrator |  ] 2025-06-02 13:10:54.550007 | orchestrator |  } 2025-06-02 13:10:54.550186 | orchestrator | } 2025-06-02 13:10:54.550557 | orchestrator | 2025-06-02 13:10:54.550950 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:10:54.551257 | orchestrator | Monday 02 June 2025 13:10:54 +0000 (0:00:00.177) 0:00:11.273 *********** 2025-06-02 
13:10:56.371152 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:56.371240 | orchestrator | 2025-06-02 13:10:56.372749 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 13:10:56.372773 | orchestrator | 2025-06-02 13:10:56.372996 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:10:56.373225 | orchestrator | Monday 02 June 2025 13:10:56 +0000 (0:00:01.838) 0:00:13.112 *********** 2025-06-02 13:10:56.632592 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 13:10:56.632778 | orchestrator | 2025-06-02 13:10:56.633098 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 13:10:56.636116 | orchestrator | Monday 02 June 2025 13:10:56 +0000 (0:00:00.259) 0:00:13.372 *********** 2025-06-02 13:10:56.823888 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:10:56.823992 | orchestrator | 2025-06-02 13:10:56.825018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:56.825131 | orchestrator | Monday 02 June 2025 13:10:56 +0000 (0:00:00.191) 0:00:13.564 *********** 2025-06-02 13:10:57.126617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:10:57.126724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 13:10:57.129934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:10:57.129965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:10:57.130136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:10:57.130910 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:10:57.130943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:10:57.133240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:10:57.133564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 13:10:57.133787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:10:57.134143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:10:57.134679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:10:57.134704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:10:57.135067 | orchestrator | 2025-06-02 13:10:57.135419 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:57.135768 | orchestrator | Monday 02 June 2025 13:10:57 +0000 (0:00:00.302) 0:00:13.867 *********** 2025-06-02 13:10:57.310426 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:57.310634 | orchestrator | 2025-06-02 13:10:57.310840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:57.310861 | orchestrator | Monday 02 June 2025 13:10:57 +0000 (0:00:00.179) 0:00:14.046 *********** 2025-06-02 13:10:57.499549 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:57.499631 | orchestrator | 2025-06-02 13:10:57.499735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:57.499944 | orchestrator | Monday 02 June 2025 13:10:57 +0000 (0:00:00.191) 0:00:14.238 *********** 2025-06-02 13:10:57.676407 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 13:10:57.677554 | orchestrator | 2025-06-02 13:10:57.677839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:57.680683 | orchestrator | Monday 02 June 2025 13:10:57 +0000 (0:00:00.178) 0:00:14.416 *********** 2025-06-02 13:10:57.820058 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:57.820170 | orchestrator | 2025-06-02 13:10:57.820343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:57.820369 | orchestrator | Monday 02 June 2025 13:10:57 +0000 (0:00:00.138) 0:00:14.555 *********** 2025-06-02 13:10:58.208807 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:58.209378 | orchestrator | 2025-06-02 13:10:58.209857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:58.210277 | orchestrator | Monday 02 June 2025 13:10:58 +0000 (0:00:00.393) 0:00:14.948 *********** 2025-06-02 13:10:58.348809 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:58.350073 | orchestrator | 2025-06-02 13:10:58.350152 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:58.350507 | orchestrator | Monday 02 June 2025 13:10:58 +0000 (0:00:00.138) 0:00:15.087 *********** 2025-06-02 13:10:58.493469 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:58.496769 | orchestrator | 2025-06-02 13:10:58.496999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:58.497238 | orchestrator | Monday 02 June 2025 13:10:58 +0000 (0:00:00.147) 0:00:15.235 *********** 2025-06-02 13:10:58.638054 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:10:58.638200 | orchestrator | 2025-06-02 13:10:58.638247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:58.638507 | 
orchestrator | Monday 02 June 2025 13:10:58 +0000 (0:00:00.142) 0:00:15.377 *********** 2025-06-02 13:10:58.931136 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd) 2025-06-02 13:10:58.931208 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd) 2025-06-02 13:10:58.931414 | orchestrator | 2025-06-02 13:10:58.931714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:58.934749 | orchestrator | Monday 02 June 2025 13:10:58 +0000 (0:00:00.295) 0:00:15.672 *********** 2025-06-02 13:10:59.239393 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27) 2025-06-02 13:10:59.239512 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27) 2025-06-02 13:10:59.239593 | orchestrator | 2025-06-02 13:10:59.239833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:59.239898 | orchestrator | Monday 02 June 2025 13:10:59 +0000 (0:00:00.305) 0:00:15.978 *********** 2025-06-02 13:10:59.557138 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5) 2025-06-02 13:10:59.557208 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5) 2025-06-02 13:10:59.559813 | orchestrator | 2025-06-02 13:10:59.559968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:59.560967 | orchestrator | Monday 02 June 2025 13:10:59 +0000 (0:00:00.315) 0:00:16.294 *********** 2025-06-02 13:10:59.888428 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85) 2025-06-02 13:10:59.888580 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85) 2025-06-02 13:10:59.888662 | orchestrator | 2025-06-02 13:10:59.888886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:10:59.889161 | orchestrator | Monday 02 June 2025 13:10:59 +0000 (0:00:00.335) 0:00:16.630 *********** 2025-06-02 13:11:00.150230 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:11:00.150395 | orchestrator | 2025-06-02 13:11:00.150926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:00.152471 | orchestrator | Monday 02 June 2025 13:11:00 +0000 (0:00:00.255) 0:00:16.885 *********** 2025-06-02 13:11:00.418578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:11:00.418720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 13:11:00.418953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:11:00.419186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:11:00.419391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:11:00.420800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:11:00.421231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:11:00.421374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:11:00.421576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 13:11:00.421997 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:11:00.422299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:11:00.423510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:11:00.423605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:11:00.423883 | orchestrator | 2025-06-02 13:11:00.424143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:00.424424 | orchestrator | Monday 02 June 2025 13:11:00 +0000 (0:00:00.274) 0:00:17.160 *********** 2025-06-02 13:11:00.555995 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:00.556100 | orchestrator | 2025-06-02 13:11:00.556246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:00.557769 | orchestrator | Monday 02 June 2025 13:11:00 +0000 (0:00:00.136) 0:00:17.297 *********** 2025-06-02 13:11:00.961901 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:00.962000 | orchestrator | 2025-06-02 13:11:00.962076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:00.962174 | orchestrator | Monday 02 June 2025 13:11:00 +0000 (0:00:00.404) 0:00:17.701 *********** 2025-06-02 13:11:01.102192 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.102815 | orchestrator | 2025-06-02 13:11:01.102923 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:01.103157 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.139) 0:00:17.841 *********** 2025-06-02 13:11:01.238387 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.240743 | orchestrator | 2025-06-02 13:11:01.240861 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-02 13:11:01.241101 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.137) 0:00:17.978 *********** 2025-06-02 13:11:01.377616 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.377811 | orchestrator | 2025-06-02 13:11:01.377829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:01.377868 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.139) 0:00:18.118 *********** 2025-06-02 13:11:01.529715 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.530139 | orchestrator | 2025-06-02 13:11:01.530174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:01.531300 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.152) 0:00:18.271 *********** 2025-06-02 13:11:01.727509 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.727976 | orchestrator | 2025-06-02 13:11:01.728900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:01.730356 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.196) 0:00:18.467 *********** 2025-06-02 13:11:01.909687 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:01.909768 | orchestrator | 2025-06-02 13:11:01.910231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:01.911263 | orchestrator | Monday 02 June 2025 13:11:01 +0000 (0:00:00.179) 0:00:18.647 *********** 2025-06-02 13:11:02.410840 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 13:11:02.410928 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 13:11:02.411160 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 13:11:02.411545 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 13:11:02.412605 | orchestrator | 2025-06-02 
13:11:02.413337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:02.413898 | orchestrator | Monday 02 June 2025 13:11:02 +0000 (0:00:00.501) 0:00:19.149 *********** 2025-06-02 13:11:02.590942 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:02.592411 | orchestrator | 2025-06-02 13:11:02.596020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:02.596492 | orchestrator | Monday 02 June 2025 13:11:02 +0000 (0:00:00.181) 0:00:19.331 *********** 2025-06-02 13:11:02.772814 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:02.772893 | orchestrator | 2025-06-02 13:11:02.773781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:02.774613 | orchestrator | Monday 02 June 2025 13:11:02 +0000 (0:00:00.179) 0:00:19.510 *********** 2025-06-02 13:11:02.935506 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:02.935877 | orchestrator | 2025-06-02 13:11:02.936218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:02.936270 | orchestrator | Monday 02 June 2025 13:11:02 +0000 (0:00:00.166) 0:00:19.677 *********** 2025-06-02 13:11:03.109662 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:03.110159 | orchestrator | 2025-06-02 13:11:03.110191 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 13:11:03.110409 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.171) 0:00:19.848 *********** 2025-06-02 13:11:03.355372 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-02 13:11:03.355550 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-02 13:11:03.356056 | orchestrator | 2025-06-02 13:11:03.361242 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-06-02 13:11:03.362537 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.246) 0:00:20.095 *********** 2025-06-02 13:11:03.485273 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:03.485931 | orchestrator | 2025-06-02 13:11:03.486260 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 13:11:03.487014 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.130) 0:00:20.226 *********** 2025-06-02 13:11:03.596653 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:03.599130 | orchestrator | 2025-06-02 13:11:03.599468 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 13:11:03.600603 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.111) 0:00:20.337 *********** 2025-06-02 13:11:03.723412 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:03.723820 | orchestrator | 2025-06-02 13:11:03.726079 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 13:11:03.726250 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.125) 0:00:20.463 *********** 2025-06-02 13:11:03.835682 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:11:03.836129 | orchestrator | 2025-06-02 13:11:03.837358 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 13:11:03.838578 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.110) 0:00:20.574 *********** 2025-06-02 13:11:03.964117 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d6dea29-b52d-558c-8900-475fd450038e'}}) 2025-06-02 13:11:03.964467 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '903578c2-c0cc-5204-b647-273ed346895e'}}) 2025-06-02 13:11:03.965188 | orchestrator | 2025-06-02 13:11:03.965491 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 13:11:03.966815 | orchestrator | Monday 02 June 2025 13:11:03 +0000 (0:00:00.131) 0:00:20.705 *********** 2025-06-02 13:11:04.089825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d6dea29-b52d-558c-8900-475fd450038e'}})  2025-06-02 13:11:04.090223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '903578c2-c0cc-5204-b647-273ed346895e'}})  2025-06-02 13:11:04.090712 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:04.091717 | orchestrator | 2025-06-02 13:11:04.092692 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 13:11:04.095389 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.122) 0:00:20.827 *********** 2025-06-02 13:11:04.225504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d6dea29-b52d-558c-8900-475fd450038e'}})  2025-06-02 13:11:04.226055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '903578c2-c0cc-5204-b647-273ed346895e'}})  2025-06-02 13:11:04.227359 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:04.227398 | orchestrator | 2025-06-02 13:11:04.227418 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 13:11:04.227473 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.134) 0:00:20.962 *********** 2025-06-02 13:11:04.365904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d6dea29-b52d-558c-8900-475fd450038e'}})  2025-06-02 13:11:04.368127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '903578c2-c0cc-5204-b647-273ed346895e'}})  2025-06-02 13:11:04.368688 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:04.370394 | 
orchestrator | 2025-06-02 13:11:04.371535 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 13:11:04.372053 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.141) 0:00:21.104 *********** 2025-06-02 13:11:04.477353 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:11:04.479197 | orchestrator | 2025-06-02 13:11:04.480397 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 13:11:04.480986 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.112) 0:00:21.217 *********** 2025-06-02 13:11:04.586878 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:11:04.590775 | orchestrator | 2025-06-02 13:11:04.592247 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 13:11:04.594333 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.102) 0:00:21.319 *********** 2025-06-02 13:11:04.697459 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:04.697628 | orchestrator | 2025-06-02 13:11:04.700683 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 13:11:04.700713 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.116) 0:00:21.436 *********** 2025-06-02 13:11:04.923400 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:04.924203 | orchestrator | 2025-06-02 13:11:04.929357 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 13:11:04.929484 | orchestrator | Monday 02 June 2025 13:11:04 +0000 (0:00:00.227) 0:00:21.663 *********** 2025-06-02 13:11:05.044708 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:05.045559 | orchestrator | 2025-06-02 13:11:05.045607 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 13:11:05.046235 | orchestrator | Monday 02 June 2025 13:11:05 +0000 
(0:00:00.120) 0:00:21.783 *********** 2025-06-02 13:11:05.186810 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 13:11:05.187170 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:11:05.188065 | orchestrator |  "sdb": { 2025-06-02 13:11:05.188099 | orchestrator |  "osd_lvm_uuid": "4d6dea29-b52d-558c-8900-475fd450038e" 2025-06-02 13:11:05.188233 | orchestrator |  }, 2025-06-02 13:11:05.189890 | orchestrator |  "sdc": { 2025-06-02 13:11:05.190113 | orchestrator |  "osd_lvm_uuid": "903578c2-c0cc-5204-b647-273ed346895e" 2025-06-02 13:11:05.190571 | orchestrator |  } 2025-06-02 13:11:05.190607 | orchestrator |  } 2025-06-02 13:11:05.190740 | orchestrator | } 2025-06-02 13:11:05.191035 | orchestrator | 2025-06-02 13:11:05.191303 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 13:11:05.192406 | orchestrator | Monday 02 June 2025 13:11:05 +0000 (0:00:00.144) 0:00:21.928 *********** 2025-06-02 13:11:05.294493 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:05.294572 | orchestrator | 2025-06-02 13:11:05.296617 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 13:11:05.296713 | orchestrator | Monday 02 June 2025 13:11:05 +0000 (0:00:00.103) 0:00:22.031 *********** 2025-06-02 13:11:05.392529 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:05.394164 | orchestrator | 2025-06-02 13:11:05.396705 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 13:11:05.397122 | orchestrator | Monday 02 June 2025 13:11:05 +0000 (0:00:00.098) 0:00:22.130 *********** 2025-06-02 13:11:05.519741 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:11:05.526211 | orchestrator | 2025-06-02 13:11:05.527253 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 13:11:05.528382 | orchestrator | Monday 02 June 2025 13:11:05 +0000 
(0:00:00.127) 0:00:22.258 *********** 2025-06-02 13:11:05.701764 | orchestrator | changed: [testbed-node-4] => { 2025-06-02 13:11:05.703952 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 13:11:05.708668 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:11:05.709045 | orchestrator |  "sdb": { 2025-06-02 13:11:05.710512 | orchestrator |  "osd_lvm_uuid": "4d6dea29-b52d-558c-8900-475fd450038e" 2025-06-02 13:11:05.711306 | orchestrator |  }, 2025-06-02 13:11:05.712558 | orchestrator |  "sdc": { 2025-06-02 13:11:05.714009 | orchestrator |  "osd_lvm_uuid": "903578c2-c0cc-5204-b647-273ed346895e" 2025-06-02 13:11:05.714996 | orchestrator |  } 2025-06-02 13:11:05.717641 | orchestrator |  }, 2025-06-02 13:11:05.718246 | orchestrator |  "lvm_volumes": [ 2025-06-02 13:11:05.718867 | orchestrator |  { 2025-06-02 13:11:05.719414 | orchestrator |  "data": "osd-block-4d6dea29-b52d-558c-8900-475fd450038e", 2025-06-02 13:11:05.720593 | orchestrator |  "data_vg": "ceph-4d6dea29-b52d-558c-8900-475fd450038e" 2025-06-02 13:11:05.721056 | orchestrator |  }, 2025-06-02 13:11:05.721761 | orchestrator |  { 2025-06-02 13:11:05.722899 | orchestrator |  "data": "osd-block-903578c2-c0cc-5204-b647-273ed346895e", 2025-06-02 13:11:05.723270 | orchestrator |  "data_vg": "ceph-903578c2-c0cc-5204-b647-273ed346895e" 2025-06-02 13:11:05.724420 | orchestrator |  } 2025-06-02 13:11:05.724712 | orchestrator |  ] 2025-06-02 13:11:05.725178 | orchestrator |  } 2025-06-02 13:11:05.725751 | orchestrator | } 2025-06-02 13:11:05.726082 | orchestrator | 2025-06-02 13:11:05.726751 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:11:05.727690 | orchestrator | Monday 02 June 2025 13:11:05 +0000 (0:00:00.183) 0:00:22.442 *********** 2025-06-02 13:11:06.627861 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 13:11:06.630929 | orchestrator | 2025-06-02 13:11:06.632694 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-06-02 13:11:06.633707 | orchestrator | 2025-06-02 13:11:06.634673 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:11:06.635462 | orchestrator | Monday 02 June 2025 13:11:06 +0000 (0:00:00.924) 0:00:23.366 *********** 2025-06-02 13:11:06.985820 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 13:11:06.986830 | orchestrator | 2025-06-02 13:11:06.987023 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 13:11:06.988173 | orchestrator | Monday 02 June 2025 13:11:06 +0000 (0:00:00.357) 0:00:23.723 *********** 2025-06-02 13:11:07.438846 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:11:07.440222 | orchestrator | 2025-06-02 13:11:07.441824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:07.442815 | orchestrator | Monday 02 June 2025 13:11:07 +0000 (0:00:00.454) 0:00:24.177 *********** 2025-06-02 13:11:07.768788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 13:11:07.771318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 13:11:07.772463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 13:11:07.774005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 13:11:07.775399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 13:11:07.777355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-02 13:11:07.778480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 13:11:07.779951 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 13:11:07.781224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 13:11:07.782667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 13:11:07.783321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 13:11:07.784503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 13:11:07.785638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 13:11:07.786850 | orchestrator | 2025-06-02 13:11:07.787638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:07.787961 | orchestrator | Monday 02 June 2025 13:11:07 +0000 (0:00:00.330) 0:00:24.508 *********** 2025-06-02 13:11:07.950146 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:07.951708 | orchestrator | 2025-06-02 13:11:07.952892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:07.953958 | orchestrator | Monday 02 June 2025 13:11:07 +0000 (0:00:00.180) 0:00:24.688 *********** 2025-06-02 13:11:08.122098 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:08.122170 | orchestrator | 2025-06-02 13:11:08.122185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:08.123081 | orchestrator | Monday 02 June 2025 13:11:08 +0000 (0:00:00.172) 0:00:24.860 *********** 2025-06-02 13:11:08.300798 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:08.301885 | orchestrator | 2025-06-02 13:11:08.302995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:08.304151 | 
orchestrator | Monday 02 June 2025 13:11:08 +0000 (0:00:00.179) 0:00:25.040 *********** 2025-06-02 13:11:08.480690 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:08.481792 | orchestrator | 2025-06-02 13:11:08.483454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:08.485176 | orchestrator | Monday 02 June 2025 13:11:08 +0000 (0:00:00.177) 0:00:25.217 *********** 2025-06-02 13:11:08.655085 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:08.656682 | orchestrator | 2025-06-02 13:11:08.657657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:08.659111 | orchestrator | Monday 02 June 2025 13:11:08 +0000 (0:00:00.177) 0:00:25.394 *********** 2025-06-02 13:11:08.842698 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:08.842775 | orchestrator | 2025-06-02 13:11:08.842789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:08.843402 | orchestrator | Monday 02 June 2025 13:11:08 +0000 (0:00:00.185) 0:00:25.580 *********** 2025-06-02 13:11:09.021573 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:09.023156 | orchestrator | 2025-06-02 13:11:09.024667 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:09.025764 | orchestrator | Monday 02 June 2025 13:11:09 +0000 (0:00:00.180) 0:00:25.761 *********** 2025-06-02 13:11:09.204557 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:09.204641 | orchestrator | 2025-06-02 13:11:09.204655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:09.204938 | orchestrator | Monday 02 June 2025 13:11:09 +0000 (0:00:00.180) 0:00:25.941 *********** 2025-06-02 13:11:09.767243 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e) 2025-06-02 13:11:09.767467 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e) 2025-06-02 13:11:09.768041 | orchestrator | 2025-06-02 13:11:09.772241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:09.772577 | orchestrator | Monday 02 June 2025 13:11:09 +0000 (0:00:00.563) 0:00:26.505 *********** 2025-06-02 13:11:10.480536 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de) 2025-06-02 13:11:10.480690 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de) 2025-06-02 13:11:10.481616 | orchestrator | 2025-06-02 13:11:10.482710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:10.483787 | orchestrator | Monday 02 June 2025 13:11:10 +0000 (0:00:00.711) 0:00:27.217 *********** 2025-06-02 13:11:10.874684 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855) 2025-06-02 13:11:10.874782 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855) 2025-06-02 13:11:10.877542 | orchestrator | 2025-06-02 13:11:10.877849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:11:10.878643 | orchestrator | Monday 02 June 2025 13:11:10 +0000 (0:00:00.395) 0:00:27.612 *********** 2025-06-02 13:11:11.287762 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da) 2025-06-02 13:11:11.289086 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da) 2025-06-02 13:11:11.291950 | orchestrator | 2025-06-02 13:11:11.291976 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-06-02 13:11:11.291989 | orchestrator | Monday 02 June 2025 13:11:11 +0000 (0:00:00.412) 0:00:28.025 *********** 2025-06-02 13:11:11.602490 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:11:11.605867 | orchestrator | 2025-06-02 13:11:11.605908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:11.605922 | orchestrator | Monday 02 June 2025 13:11:11 +0000 (0:00:00.313) 0:00:28.339 *********** 2025-06-02 13:11:11.976833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 13:11:11.976963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 13:11:11.977531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 13:11:11.980872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 13:11:11.980897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 13:11:11.981056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 13:11:11.981564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 13:11:11.981856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 13:11:11.982211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 13:11:11.982549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 13:11:11.982872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-06-02 13:11:11.983197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 13:11:11.983533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 13:11:11.983790 | orchestrator | 2025-06-02 13:11:11.984098 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:11.984575 | orchestrator | Monday 02 June 2025 13:11:11 +0000 (0:00:00.374) 0:00:28.713 *********** 2025-06-02 13:11:12.163857 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:12.164372 | orchestrator | 2025-06-02 13:11:12.166103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:12.166918 | orchestrator | Monday 02 June 2025 13:11:12 +0000 (0:00:00.186) 0:00:28.900 *********** 2025-06-02 13:11:12.353958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:12.354110 | orchestrator | 2025-06-02 13:11:12.355872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:12.356878 | orchestrator | Monday 02 June 2025 13:11:12 +0000 (0:00:00.192) 0:00:29.092 *********** 2025-06-02 13:11:12.553534 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:12.554918 | orchestrator | 2025-06-02 13:11:12.555024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:12.555262 | orchestrator | Monday 02 June 2025 13:11:12 +0000 (0:00:00.200) 0:00:29.293 *********** 2025-06-02 13:11:12.750546 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:12.750652 | orchestrator | 2025-06-02 13:11:12.750759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:12.752530 | orchestrator | Monday 02 June 2025 13:11:12 +0000 (0:00:00.196) 0:00:29.489 *********** 2025-06-02 13:11:12.947982 
| orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:12.948355 | orchestrator | 2025-06-02 13:11:12.949350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:12.950922 | orchestrator | Monday 02 June 2025 13:11:12 +0000 (0:00:00.196) 0:00:29.686 *********** 2025-06-02 13:11:13.544563 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:13.545285 | orchestrator | 2025-06-02 13:11:13.546168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:13.547403 | orchestrator | Monday 02 June 2025 13:11:13 +0000 (0:00:00.597) 0:00:30.283 *********** 2025-06-02 13:11:13.736022 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:13.736923 | orchestrator | 2025-06-02 13:11:13.737411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:13.738144 | orchestrator | Monday 02 June 2025 13:11:13 +0000 (0:00:00.192) 0:00:30.475 *********** 2025-06-02 13:11:13.943681 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:13.943834 | orchestrator | 2025-06-02 13:11:13.944371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:13.945334 | orchestrator | Monday 02 June 2025 13:11:13 +0000 (0:00:00.206) 0:00:30.682 *********** 2025-06-02 13:11:14.610249 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 13:11:14.610550 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 13:11:14.612174 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 13:11:14.612884 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 13:11:14.613876 | orchestrator | 2025-06-02 13:11:14.614646 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:14.615385 | orchestrator | Monday 02 June 2025 13:11:14 +0000 (0:00:00.667) 0:00:31.349 
*********** 2025-06-02 13:11:14.819995 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:14.820193 | orchestrator | 2025-06-02 13:11:14.820852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:14.821161 | orchestrator | Monday 02 June 2025 13:11:14 +0000 (0:00:00.209) 0:00:31.559 *********** 2025-06-02 13:11:15.017970 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:15.018783 | orchestrator | 2025-06-02 13:11:15.019179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:15.020336 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.197) 0:00:31.757 *********** 2025-06-02 13:11:15.213916 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:15.214064 | orchestrator | 2025-06-02 13:11:15.214210 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:11:15.215143 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.194) 0:00:31.951 *********** 2025-06-02 13:11:15.403870 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:15.404807 | orchestrator | 2025-06-02 13:11:15.406458 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 13:11:15.407324 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.190) 0:00:32.142 *********** 2025-06-02 13:11:15.567790 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-02 13:11:15.567900 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-02 13:11:15.568689 | orchestrator | 2025-06-02 13:11:15.568844 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 13:11:15.569331 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.164) 0:00:32.307 *********** 2025-06-02 13:11:15.696504 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 13:11:15.696713 | orchestrator | 2025-06-02 13:11:15.697857 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 13:11:15.698737 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.128) 0:00:32.435 *********** 2025-06-02 13:11:15.831607 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:15.832565 | orchestrator | 2025-06-02 13:11:15.833348 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 13:11:15.834377 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.135) 0:00:32.571 *********** 2025-06-02 13:11:15.956249 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:15.956935 | orchestrator | 2025-06-02 13:11:15.958115 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 13:11:15.959259 | orchestrator | Monday 02 June 2025 13:11:15 +0000 (0:00:00.124) 0:00:32.695 *********** 2025-06-02 13:11:16.275801 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:11:16.276540 | orchestrator | 2025-06-02 13:11:16.276571 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 13:11:16.277724 | orchestrator | Monday 02 June 2025 13:11:16 +0000 (0:00:00.318) 0:00:33.014 *********** 2025-06-02 13:11:16.442338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e284bd18-e265-58a5-a2ab-ec21b03cc36c'}}) 2025-06-02 13:11:16.443272 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4e8c4e16-432b-566e-bc19-b5260bfeea4e'}}) 2025-06-02 13:11:16.444256 | orchestrator | 2025-06-02 13:11:16.445257 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 13:11:16.446141 | orchestrator | Monday 02 June 2025 13:11:16 +0000 (0:00:00.166) 0:00:33.180 *********** 2025-06-02 13:11:16.583537 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e284bd18-e265-58a5-a2ab-ec21b03cc36c'}})  2025-06-02 13:11:16.584183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4e8c4e16-432b-566e-bc19-b5260bfeea4e'}})  2025-06-02 13:11:16.584949 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:16.585806 | orchestrator | 2025-06-02 13:11:16.586634 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 13:11:16.587394 | orchestrator | Monday 02 June 2025 13:11:16 +0000 (0:00:00.141) 0:00:33.322 *********** 2025-06-02 13:11:16.725705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e284bd18-e265-58a5-a2ab-ec21b03cc36c'}})  2025-06-02 13:11:16.726004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4e8c4e16-432b-566e-bc19-b5260bfeea4e'}})  2025-06-02 13:11:16.726931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:16.727013 | orchestrator | 2025-06-02 13:11:16.727602 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 13:11:16.728217 | orchestrator | Monday 02 June 2025 13:11:16 +0000 (0:00:00.142) 0:00:33.465 *********** 2025-06-02 13:11:16.871646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e284bd18-e265-58a5-a2ab-ec21b03cc36c'}})  2025-06-02 13:11:16.872266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4e8c4e16-432b-566e-bc19-b5260bfeea4e'}})  2025-06-02 13:11:16.873182 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:16.873898 | orchestrator | 2025-06-02 13:11:16.875527 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 13:11:16.875552 | orchestrator | Monday 02 June 2025 13:11:16 +0000 
(0:00:00.145) 0:00:33.611 *********** 2025-06-02 13:11:17.006147 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:11:17.006514 | orchestrator | 2025-06-02 13:11:17.007561 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 13:11:17.008308 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.134) 0:00:33.746 *********** 2025-06-02 13:11:17.135021 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:11:17.135149 | orchestrator | 2025-06-02 13:11:17.137851 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 13:11:17.138591 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.127) 0:00:33.873 *********** 2025-06-02 13:11:17.267817 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:17.268191 | orchestrator | 2025-06-02 13:11:17.269086 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 13:11:17.269989 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.134) 0:00:34.007 *********** 2025-06-02 13:11:17.409079 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:17.409256 | orchestrator | 2025-06-02 13:11:17.410571 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 13:11:17.411247 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.141) 0:00:34.148 *********** 2025-06-02 13:11:17.543766 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:17.545647 | orchestrator | 2025-06-02 13:11:17.546288 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 13:11:17.548529 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.134) 0:00:34.283 *********** 2025-06-02 13:11:17.686973 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:11:17.687786 | orchestrator |  "ceph_osd_devices": { 2025-06-02 13:11:17.688936 | orchestrator |  "sdb": 
{ 2025-06-02 13:11:17.689608 | orchestrator |  "osd_lvm_uuid": "e284bd18-e265-58a5-a2ab-ec21b03cc36c" 2025-06-02 13:11:17.691102 | orchestrator |  }, 2025-06-02 13:11:17.691662 | orchestrator |  "sdc": { 2025-06-02 13:11:17.693673 | orchestrator |  "osd_lvm_uuid": "4e8c4e16-432b-566e-bc19-b5260bfeea4e" 2025-06-02 13:11:17.693906 | orchestrator |  } 2025-06-02 13:11:17.694825 | orchestrator |  } 2025-06-02 13:11:17.695573 | orchestrator | } 2025-06-02 13:11:17.696446 | orchestrator | 2025-06-02 13:11:17.697095 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-02 13:11:17.697839 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.143) 0:00:34.426 *********** 2025-06-02 13:11:17.820236 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:17.821029 | orchestrator | 2025-06-02 13:11:17.823285 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-02 13:11:17.823379 | orchestrator | Monday 02 June 2025 13:11:17 +0000 (0:00:00.133) 0:00:34.559 *********** 2025-06-02 13:11:18.142560 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:18.143271 | orchestrator | 2025-06-02 13:11:18.144139 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-02 13:11:18.145732 | orchestrator | Monday 02 June 2025 13:11:18 +0000 (0:00:00.321) 0:00:34.881 *********** 2025-06-02 13:11:18.269530 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:11:18.269696 | orchestrator | 2025-06-02 13:11:18.270809 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-02 13:11:18.272900 | orchestrator | Monday 02 June 2025 13:11:18 +0000 (0:00:00.127) 0:00:35.008 *********** 2025-06-02 13:11:18.475463 | orchestrator | changed: [testbed-node-5] => { 2025-06-02 13:11:18.476125 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-02 13:11:18.478013 | orchestrator 
|  "ceph_osd_devices": { 2025-06-02 13:11:18.478088 | orchestrator |  "sdb": { 2025-06-02 13:11:18.478659 | orchestrator |  "osd_lvm_uuid": "e284bd18-e265-58a5-a2ab-ec21b03cc36c" 2025-06-02 13:11:18.479507 | orchestrator |  }, 2025-06-02 13:11:18.480389 | orchestrator |  "sdc": { 2025-06-02 13:11:18.481198 | orchestrator |  "osd_lvm_uuid": "4e8c4e16-432b-566e-bc19-b5260bfeea4e" 2025-06-02 13:11:18.482067 | orchestrator |  } 2025-06-02 13:11:18.482535 | orchestrator |  }, 2025-06-02 13:11:18.483396 | orchestrator |  "lvm_volumes": [ 2025-06-02 13:11:18.484132 | orchestrator |  { 2025-06-02 13:11:18.484616 | orchestrator |  "data": "osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c", 2025-06-02 13:11:18.485658 | orchestrator |  "data_vg": "ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c" 2025-06-02 13:11:18.485855 | orchestrator |  }, 2025-06-02 13:11:18.486662 | orchestrator |  { 2025-06-02 13:11:18.487628 | orchestrator |  "data": "osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e", 2025-06-02 13:11:18.488297 | orchestrator |  "data_vg": "ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e" 2025-06-02 13:11:18.489347 | orchestrator |  } 2025-06-02 13:11:18.489803 | orchestrator |  ] 2025-06-02 13:11:18.490260 | orchestrator |  } 2025-06-02 13:11:18.491338 | orchestrator | } 2025-06-02 13:11:18.492154 | orchestrator | 2025-06-02 13:11:18.492592 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-02 13:11:18.493218 | orchestrator | Monday 02 June 2025 13:11:18 +0000 (0:00:00.205) 0:00:35.213 *********** 2025-06-02 13:11:19.400720 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 13:11:19.401787 | orchestrator | 2025-06-02 13:11:19.402224 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:11:19.402889 | orchestrator | 2025-06-02 13:11:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 13:11:19.403309 | orchestrator | 2025-06-02 13:11:19 | INFO  | Please wait and do not abort execution. 2025-06-02 13:11:19.404346 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:11:19.405123 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:11:19.406160 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:11:19.406937 | orchestrator | 2025-06-02 13:11:19.407884 | orchestrator | 2025-06-02 13:11:19.408313 | orchestrator | 2025-06-02 13:11:19.409136 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:11:19.409678 | orchestrator | Monday 02 June 2025 13:11:19 +0000 (0:00:00.926) 0:00:36.140 *********** 2025-06-02 13:11:19.410635 | orchestrator | =============================================================================== 2025-06-02 13:11:19.411338 | orchestrator | Write configuration file ------------------------------------------------ 3.69s 2025-06-02 13:11:19.412489 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s 2025-06-02 13:11:19.412874 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2025-06-02 13:11:19.413400 | orchestrator | Get initial list of available block devices ----------------------------- 0.87s 2025-06-02 13:11:19.413860 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s 2025-06-02 13:11:19.414543 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2025-06-02 13:11:19.414957 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-06-02 13:11:19.416550 | orchestrator | Add known partitions to the list of available block devices ------------- 
0.67s 2025-06-02 13:11:19.417082 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-06-02 13:11:19.417625 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s 2025-06-02 13:11:19.418094 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.57s 2025-06-02 13:11:19.418559 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-06-02 13:11:19.419198 | orchestrator | Print configuration data ------------------------------------------------ 0.57s 2025-06-02 13:11:19.419603 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-06-02 13:11:19.420310 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s 2025-06-02 13:11:19.420699 | orchestrator | Print DB devices -------------------------------------------------------- 0.54s 2025-06-02 13:11:19.421097 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2025-06-02 13:11:19.421501 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2025-06-02 13:11:19.421933 | orchestrator | Add known partitions to the list of available block devices ------------- 0.50s 2025-06-02 13:11:19.422288 | orchestrator | Set WAL devices config data --------------------------------------------- 0.49s 2025-06-02 13:11:31.679968 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:11:31.680084 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:11:31.680099 | orchestrator | Registering Redlock._release_script 2025-06-02 13:11:31.741939 | orchestrator | 2025-06-02 13:11:31 | INFO  | Task 1a8d3403-aaad-4c4f-a4d2-b6d81c97ea92 (sync inventory) is running in background. Output coming soon. 
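Editor's note on the play above: the `osd_lvm_uuid` values it assigns (e.g. `e284bd18-e265-58a5-…`) carry version 5 in their third group, i.e. they are name-based SHA-1 UUIDs, which is why re-running the configuration yields the same VG/LV names. A minimal sketch of that idea, assuming a hypothetical namespace and name scheme (the actual inputs used by the playbook are not visible in this log):

```python
import uuid

# Derive a stable, name-based (version 5) UUID per host/device pair.
# NAMESPACE_DNS and the "hostname-device" name below are illustrative
# assumptions, not the playbook's actual inputs.
def osd_lvm_uuid(hostname: str, device: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

# Deterministic: the same inputs always produce the same UUID,
# so a repeated run regenerates identical ceph-<uuid> VG names.
first = osd_lvm_uuid("testbed-node-5", "sdb")
second = osd_lvm_uuid("testbed-node-5", "sdb")
assert first == second
assert uuid.UUID(first).version == 5
```

This determinism is what makes tasks like "Set UUIDs for OSD VGs/LVs" idempotent: the generated `lvm_volumes` entries (`data: osd-block-<uuid>`, `data_vg: ceph-<uuid>`) do not change between runs.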
2025-06-02 13:12:13.506172 | orchestrator | 2025-06-02 13:11:56 | INFO  | Starting group_vars file reorganization 2025-06-02 13:12:13.506274 | orchestrator | 2025-06-02 13:11:56 | INFO  | Moved 0 file(s) to their respective directories 2025-06-02 13:12:13.506289 | orchestrator | 2025-06-02 13:11:56 | INFO  | Group_vars file reorganization completed 2025-06-02 13:12:13.506301 | orchestrator | 2025-06-02 13:11:58 | INFO  | Starting variable preparation from inventory 2025-06-02 13:12:13.506312 | orchestrator | 2025-06-02 13:12:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-02 13:12:13.506323 | orchestrator | 2025-06-02 13:12:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-02 13:12:13.506355 | orchestrator | 2025-06-02 13:12:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-02 13:12:13.506367 | orchestrator | 2025-06-02 13:12:00 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-02 13:12:13.506378 | orchestrator | 2025-06-02 13:12:00 | INFO  | Variable preparation completed: 2025-06-02 13:12:13.506432 | orchestrator | 2025-06-02 13:12:01 | INFO  | Starting inventory overwrite handling 2025-06-02 13:12:13.506443 | orchestrator | 2025-06-02 13:12:01 | INFO  | Handling group overwrites in 99-overwrite 2025-06-02 13:12:13.506454 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group frr:children from 60-generic 2025-06-02 13:12:13.506464 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group storage:children from 50-kolla 2025-06-02 13:12:13.506475 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-02 13:12:13.506494 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-02 13:12:13.506505 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-02 13:12:13.506516 | orchestrator | 2025-06-02 13:12:01 | INFO  | Handling group 
overwrites in 20-roles 2025-06-02 13:12:13.506526 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-02 13:12:13.506537 | orchestrator | 2025-06-02 13:12:01 | INFO  | Removed 6 group(s) in total 2025-06-02 13:12:13.506548 | orchestrator | 2025-06-02 13:12:01 | INFO  | Inventory overwrite handling completed 2025-06-02 13:12:13.506559 | orchestrator | 2025-06-02 13:12:02 | INFO  | Starting merge of inventory files 2025-06-02 13:12:13.506569 | orchestrator | 2025-06-02 13:12:02 | INFO  | Inventory files merged successfully 2025-06-02 13:12:13.506580 | orchestrator | 2025-06-02 13:12:06 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-02 13:12:13.506591 | orchestrator | 2025-06-02 13:12:12 | INFO  | Successfully wrote ClusterShell configuration 2025-06-02 13:12:15.291582 | orchestrator | 2025-06-02 13:12:15 | INFO  | Task 96ce1b5e-339c-482b-9533-a0b75cb45791 (ceph-create-lvm-devices) was prepared for execution. 2025-06-02 13:12:15.291691 | orchestrator | 2025-06-02 13:12:15 | INFO  | It takes a moment until task 96ce1b5e-339c-482b-9533-a0b75cb45791 (ceph-create-lvm-devices) has been started and output is visible here. 
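Editor's note before the next play: the repeated "Add known links to the list of available block devices" tasks resolve stable `/dev/disk/by-id` aliases (`scsi-…`, `ata-…`) to kernel device names such as `sdb`, so that OSD devices can be addressed by a name that survives reboots. A self-contained sketch of that resolution step, using a fabricated stand-in directory instead of the real `/dev/disk/by-id` so it runs anywhere (the helper name is hypothetical, not the playbook's code):

```python
import os
import tempfile

def device_links(by_id_dir: str) -> dict:
    """Map each symlink in by_id_dir to the kernel device it resolves to."""
    links = {}
    for name in sorted(os.listdir(by_id_dir)):
        path = os.path.join(by_id_dir, name)
        if os.path.islink(path):
            links[name] = os.path.basename(os.path.realpath(path))
    return links

# Fabricated stand-in for /dev/disk/by-id: one fake kernel device plus
# one stable alias pointing at it.
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "sdb"), "w").close()
    os.symlink(os.path.join(tmp, "sdb"),
               os.path.join(tmp, "scsi-0QEMU_QEMU_HARDDISK_demo"))
    print(device_links(tmp))  # {'scsi-0QEMU_QEMU_HARDDISK_demo': 'sdb'}
```

The log shows the real run producing exactly this kind of mapping, e.g. `scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-…` resolving to one of `sdb`/`sdc`/`sdd` on each node.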
2025-06-02 13:12:19.324601 | orchestrator | 2025-06-02 13:12:19.324829 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 13:12:19.325611 | orchestrator | 2025-06-02 13:12:19.327943 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:12:19.329197 | orchestrator | Monday 02 June 2025 13:12:19 +0000 (0:00:00.305) 0:00:00.305 *********** 2025-06-02 13:12:19.553642 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:12:19.553840 | orchestrator | 2025-06-02 13:12:19.554810 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 13:12:19.555732 | orchestrator | Monday 02 June 2025 13:12:19 +0000 (0:00:00.230) 0:00:00.536 *********** 2025-06-02 13:12:19.770855 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:19.771316 | orchestrator | 2025-06-02 13:12:19.772378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:19.773035 | orchestrator | Monday 02 June 2025 13:12:19 +0000 (0:00:00.218) 0:00:00.754 *********** 2025-06-02 13:12:20.158278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 13:12:20.158435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 13:12:20.158452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 13:12:20.158495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 13:12:20.158676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 13:12:20.159089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 13:12:20.159604 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 13:12:20.160009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 13:12:20.160512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 13:12:20.160998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 13:12:20.161594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 13:12:20.162482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 13:12:20.163617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 13:12:20.164105 | orchestrator | 2025-06-02 13:12:20.164800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:20.165836 | orchestrator | Monday 02 June 2025 13:12:20 +0000 (0:00:00.381) 0:00:01.135 *********** 2025-06-02 13:12:20.607665 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:20.607797 | orchestrator | 2025-06-02 13:12:20.607883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:20.607901 | orchestrator | Monday 02 June 2025 13:12:20 +0000 (0:00:00.452) 0:00:01.588 *********** 2025-06-02 13:12:20.787788 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:20.787888 | orchestrator | 2025-06-02 13:12:20.787903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:20.787917 | orchestrator | Monday 02 June 2025 13:12:20 +0000 (0:00:00.178) 0:00:01.767 *********** 2025-06-02 13:12:20.983959 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:20.984159 | orchestrator | 2025-06-02 13:12:20.985815 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-06-02 13:12:20.987066 | orchestrator | Monday 02 June 2025 13:12:20 +0000 (0:00:00.197) 0:00:01.965 *********** 2025-06-02 13:12:21.168958 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:21.170060 | orchestrator | 2025-06-02 13:12:21.171031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:21.172921 | orchestrator | Monday 02 June 2025 13:12:21 +0000 (0:00:00.186) 0:00:02.152 *********** 2025-06-02 13:12:21.349285 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:21.350719 | orchestrator | 2025-06-02 13:12:21.351845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:21.353587 | orchestrator | Monday 02 June 2025 13:12:21 +0000 (0:00:00.180) 0:00:02.332 *********** 2025-06-02 13:12:21.544896 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:21.548668 | orchestrator | 2025-06-02 13:12:21.548702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:21.548716 | orchestrator | Monday 02 June 2025 13:12:21 +0000 (0:00:00.193) 0:00:02.526 *********** 2025-06-02 13:12:21.744678 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:21.745583 | orchestrator | 2025-06-02 13:12:21.746999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:21.750685 | orchestrator | Monday 02 June 2025 13:12:21 +0000 (0:00:00.201) 0:00:02.727 *********** 2025-06-02 13:12:21.938145 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:21.938411 | orchestrator | 2025-06-02 13:12:21.939494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:21.940705 | orchestrator | Monday 02 June 2025 13:12:21 +0000 (0:00:00.192) 0:00:02.920 *********** 2025-06-02 13:12:22.336488 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb) 2025-06-02 13:12:22.336950 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb) 2025-06-02 13:12:22.338399 | orchestrator | 2025-06-02 13:12:22.341757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:22.341847 | orchestrator | Monday 02 June 2025 13:12:22 +0000 (0:00:00.398) 0:00:03.318 *********** 2025-06-02 13:12:22.716361 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff) 2025-06-02 13:12:22.718731 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff) 2025-06-02 13:12:22.718786 | orchestrator | 2025-06-02 13:12:22.718800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:22.719656 | orchestrator | Monday 02 June 2025 13:12:22 +0000 (0:00:00.379) 0:00:03.697 *********** 2025-06-02 13:12:23.276975 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5) 2025-06-02 13:12:23.277883 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5) 2025-06-02 13:12:23.278891 | orchestrator | 2025-06-02 13:12:23.280074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:23.281021 | orchestrator | Monday 02 June 2025 13:12:23 +0000 (0:00:00.561) 0:00:04.259 *********** 2025-06-02 13:12:23.862733 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4) 2025-06-02 13:12:23.863617 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4) 2025-06-02 13:12:23.865092 | orchestrator | 2025-06-02 13:12:23.865994 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:23.867464 | orchestrator | Monday 02 June 2025 13:12:23 +0000 (0:00:00.584) 0:00:04.844 *********** 2025-06-02 13:12:24.568370 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:12:24.570149 | orchestrator | 2025-06-02 13:12:24.570301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:24.571312 | orchestrator | Monday 02 June 2025 13:12:24 +0000 (0:00:00.705) 0:00:05.549 *********** 2025-06-02 13:12:24.979568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 13:12:24.981462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 13:12:24.985839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 13:12:24.986815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 13:12:24.987711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 13:12:24.988357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 13:12:24.989054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 13:12:24.991684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-02 13:12:24.992254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 13:12:24.993007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 13:12:24.996106 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 13:12:24.996688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 13:12:24.997319 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 13:12:25.000654 | orchestrator | 2025-06-02 13:12:25.001243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.001896 | orchestrator | Monday 02 June 2025 13:12:24 +0000 (0:00:00.412) 0:00:05.962 *********** 2025-06-02 13:12:25.167944 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:25.169582 | orchestrator | 2025-06-02 13:12:25.169617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.170466 | orchestrator | Monday 02 June 2025 13:12:25 +0000 (0:00:00.189) 0:00:06.151 *********** 2025-06-02 13:12:25.358587 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:25.359346 | orchestrator | 2025-06-02 13:12:25.361462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.365046 | orchestrator | Monday 02 June 2025 13:12:25 +0000 (0:00:00.190) 0:00:06.342 *********** 2025-06-02 13:12:25.560755 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:25.560934 | orchestrator | 2025-06-02 13:12:25.561874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.562459 | orchestrator | Monday 02 June 2025 13:12:25 +0000 (0:00:00.199) 0:00:06.541 *********** 2025-06-02 13:12:25.747791 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:25.747909 | orchestrator | 2025-06-02 13:12:25.748012 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.752629 | orchestrator | Monday 02 June 2025 
13:12:25 +0000 (0:00:00.189) 0:00:06.731 *********** 2025-06-02 13:12:25.941928 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:25.942181 | orchestrator | 2025-06-02 13:12:25.945205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:25.945365 | orchestrator | Monday 02 June 2025 13:12:25 +0000 (0:00:00.195) 0:00:06.926 *********** 2025-06-02 13:12:26.162313 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:26.163110 | orchestrator | 2025-06-02 13:12:26.164058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:26.165349 | orchestrator | Monday 02 June 2025 13:12:26 +0000 (0:00:00.219) 0:00:07.145 *********** 2025-06-02 13:12:26.350103 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:26.351140 | orchestrator | 2025-06-02 13:12:26.352092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:26.352680 | orchestrator | Monday 02 June 2025 13:12:26 +0000 (0:00:00.187) 0:00:07.333 *********** 2025-06-02 13:12:26.547328 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:26.547653 | orchestrator | 2025-06-02 13:12:26.548737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:26.549847 | orchestrator | Monday 02 June 2025 13:12:26 +0000 (0:00:00.195) 0:00:07.529 *********** 2025-06-02 13:12:27.567946 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 13:12:27.568293 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 13:12:27.571890 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 13:12:27.572756 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 13:12:27.573539 | orchestrator | 2025-06-02 13:12:27.574560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:27.575938 
| orchestrator | Monday 02 June 2025 13:12:27 +0000 (0:00:01.020) 0:00:08.549 *********** 2025-06-02 13:12:27.790481 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:27.790888 | orchestrator | 2025-06-02 13:12:27.791825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:27.792969 | orchestrator | Monday 02 June 2025 13:12:27 +0000 (0:00:00.225) 0:00:08.774 *********** 2025-06-02 13:12:27.993154 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:27.993288 | orchestrator | 2025-06-02 13:12:27.994387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:27.995148 | orchestrator | Monday 02 June 2025 13:12:27 +0000 (0:00:00.201) 0:00:08.975 *********** 2025-06-02 13:12:28.203832 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:28.203962 | orchestrator | 2025-06-02 13:12:28.204054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:28.204605 | orchestrator | Monday 02 June 2025 13:12:28 +0000 (0:00:00.211) 0:00:09.187 *********** 2025-06-02 13:12:28.400638 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:28.401001 | orchestrator | 2025-06-02 13:12:28.402352 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 13:12:28.405629 | orchestrator | Monday 02 June 2025 13:12:28 +0000 (0:00:00.196) 0:00:09.383 *********** 2025-06-02 13:12:28.536646 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:28.539016 | orchestrator | 2025-06-02 13:12:28.539483 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 13:12:28.540093 | orchestrator | Monday 02 June 2025 13:12:28 +0000 (0:00:00.131) 0:00:09.514 *********** 2025-06-02 13:12:28.712165 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}}) 2025-06-02 13:12:28.712691 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}}) 2025-06-02 13:12:28.713548 | orchestrator | 2025-06-02 13:12:28.713919 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 13:12:28.714977 | orchestrator | Monday 02 June 2025 13:12:28 +0000 (0:00:00.181) 0:00:09.696 *********** 2025-06-02 13:12:30.705249 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}) 2025-06-02 13:12:30.706287 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}) 2025-06-02 13:12:30.707027 | orchestrator | 2025-06-02 13:12:30.708083 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 13:12:30.709050 | orchestrator | Monday 02 June 2025 13:12:30 +0000 (0:00:01.991) 0:00:11.687 *********** 2025-06-02 13:12:30.849427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:30.850133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:30.850653 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:30.851487 | orchestrator | 2025-06-02 13:12:30.855724 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 13:12:30.855820 | orchestrator | Monday 02 June 2025 13:12:30 +0000 (0:00:00.145) 0:00:11.833 *********** 2025-06-02 13:12:32.250507 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}) 2025-06-02 13:12:32.250719 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}) 2025-06-02 13:12:32.251621 | orchestrator | 2025-06-02 13:12:32.252518 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 13:12:32.255794 | orchestrator | Monday 02 June 2025 13:12:32 +0000 (0:00:01.400) 0:00:13.233 *********** 2025-06-02 13:12:32.400218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:32.401054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:32.402353 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:32.405778 | orchestrator | 2025-06-02 13:12:32.405810 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 13:12:32.405822 | orchestrator | Monday 02 June 2025 13:12:32 +0000 (0:00:00.150) 0:00:13.383 *********** 2025-06-02 13:12:32.533801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:32.537205 | orchestrator | 2025-06-02 13:12:32.537878 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 13:12:32.538509 | orchestrator | Monday 02 June 2025 13:12:32 +0000 (0:00:00.130) 0:00:13.513 *********** 2025-06-02 13:12:32.851517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:32.852213 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:32.853552 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:32.854522 | orchestrator | 2025-06-02 13:12:32.856793 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 13:12:32.856815 | orchestrator | Monday 02 June 2025 13:12:32 +0000 (0:00:00.319) 0:00:13.833 *********** 2025-06-02 13:12:32.976796 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:32.977564 | orchestrator | 2025-06-02 13:12:32.978426 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 13:12:32.979735 | orchestrator | Monday 02 June 2025 13:12:32 +0000 (0:00:00.127) 0:00:13.960 *********** 2025-06-02 13:12:33.109980 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:33.110514 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:33.111210 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.113219 | orchestrator | 2025-06-02 13:12:33.113506 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 13:12:33.114411 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.132) 0:00:14.093 *********** 2025-06-02 13:12:33.234190 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.235229 | orchestrator | 2025-06-02 13:12:33.235780 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 13:12:33.236699 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.124) 0:00:14.217 *********** 2025-06-02 13:12:33.370249 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:33.371199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:33.371923 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.372759 | orchestrator | 2025-06-02 13:12:33.373488 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 13:12:33.374257 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.135) 0:00:14.353 *********** 2025-06-02 13:12:33.496769 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:33.497530 | orchestrator | 2025-06-02 13:12:33.503555 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 13:12:33.503597 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.126) 0:00:14.480 *********** 2025-06-02 13:12:33.636034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:33.636911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:33.637725 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.638672 | orchestrator | 2025-06-02 13:12:33.639722 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 13:12:33.640099 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.138) 0:00:14.619 *********** 2025-06-02 13:12:33.771880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  
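The tasks above derive, for each entry in `ceph_osd_devices` (a device name keyed to a stable `osd_lvm_uuid`), one volume group named `ceph-<uuid>` and one logical volume named `osd-block-<uuid>`. A minimal Python sketch of that naming scheme, reconstructed from the item values visible in the log (this illustrates the mapping only, not the actual OSISM task code):

```python
# Naming scheme visible in the "Create dict of block VGs -> PVs",
# "Create block VGs" and "Create block LVs" tasks above:
# each ceph_osd_devices entry yields data_vg="ceph-<uuid>" and
# data="osd-block-<uuid>". UUIDs are taken from the log output.

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "16065c32-ca37-5a4d-8ac9-40bfcb225d4e"},
    "sdc": {"osd_lvm_uuid": "8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb"},
}

# Build the lvm_volumes-style list the play loops over.
lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",
    }
    for v in ceph_osd_devices.values()
]

for item in lvm_volumes:
    print(item)
```

Each resulting item matches the `(item={'data': ..., 'data_vg': ...})` pairs reported as `changed` in the log.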
2025-06-02 13:12:33.772739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:33.776176 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.776218 | orchestrator | 2025-06-02 13:12:33.776231 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 13:12:33.776363 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.137) 0:00:14.756 *********** 2025-06-02 13:12:33.901319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:33.902154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:33.903038 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:33.903938 | orchestrator | 2025-06-02 13:12:33.905713 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 13:12:33.907531 | orchestrator | Monday 02 June 2025 13:12:33 +0000 (0:00:00.128) 0:00:14.884 *********** 2025-06-02 13:12:34.029121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:34.029767 | orchestrator | 2025-06-02 13:12:34.030793 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 13:12:34.031634 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.126) 0:00:15.011 *********** 2025-06-02 13:12:34.150811 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:34.151537 | orchestrator | 2025-06-02 13:12:34.152450 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 13:12:34.153355 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.123) 
0:00:15.135 *********** 2025-06-02 13:12:34.274252 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:34.275049 | orchestrator | 2025-06-02 13:12:34.276291 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 13:12:34.277073 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.122) 0:00:15.258 *********** 2025-06-02 13:12:34.494444 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:12:34.494579 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 13:12:34.495274 | orchestrator | } 2025-06-02 13:12:34.496168 | orchestrator | 2025-06-02 13:12:34.497099 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 13:12:34.498512 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.219) 0:00:15.477 *********** 2025-06-02 13:12:34.617575 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:12:34.618816 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 13:12:34.620074 | orchestrator | } 2025-06-02 13:12:34.623136 | orchestrator | 2025-06-02 13:12:34.623945 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 13:12:34.624718 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.124) 0:00:15.601 *********** 2025-06-02 13:12:34.744698 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:12:34.747961 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 13:12:34.748887 | orchestrator | } 2025-06-02 13:12:34.749417 | orchestrator | 2025-06-02 13:12:34.751464 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 13:12:34.751877 | orchestrator | Monday 02 June 2025 13:12:34 +0000 (0:00:00.125) 0:00:15.727 *********** 2025-06-02 13:12:35.380257 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:35.380347 | orchestrator | 2025-06-02 13:12:35.380580 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-02 13:12:35.381885 | orchestrator | Monday 02 June 2025 13:12:35 +0000 (0:00:00.633) 0:00:16.360 *********** 2025-06-02 13:12:35.863525 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:35.866761 | orchestrator | 2025-06-02 13:12:35.866798 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 13:12:35.866812 | orchestrator | Monday 02 June 2025 13:12:35 +0000 (0:00:00.483) 0:00:16.844 *********** 2025-06-02 13:12:36.359612 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:36.359696 | orchestrator | 2025-06-02 13:12:36.359711 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 13:12:36.360610 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.497) 0:00:17.341 *********** 2025-06-02 13:12:36.484635 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:36.485169 | orchestrator | 2025-06-02 13:12:36.488884 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 13:12:36.489294 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.125) 0:00:17.467 *********** 2025-06-02 13:12:36.568925 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:36.569502 | orchestrator | 2025-06-02 13:12:36.570148 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 13:12:36.570854 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.085) 0:00:17.553 *********** 2025-06-02 13:12:36.669979 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:36.670571 | orchestrator | 2025-06-02 13:12:36.672504 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 13:12:36.675478 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.098) 0:00:17.651 *********** 2025-06-02 13:12:36.802091 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-02 13:12:36.802954 | orchestrator |  "vgs_report": { 2025-06-02 13:12:36.803785 | orchestrator |  "vg": [] 2025-06-02 13:12:36.806598 | orchestrator |  } 2025-06-02 13:12:36.806624 | orchestrator | } 2025-06-02 13:12:36.807031 | orchestrator | 2025-06-02 13:12:36.807705 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 13:12:36.808437 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.134) 0:00:17.786 *********** 2025-06-02 13:12:36.928006 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:36.929356 | orchestrator | 2025-06-02 13:12:36.930068 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 13:12:36.933512 | orchestrator | Monday 02 June 2025 13:12:36 +0000 (0:00:00.126) 0:00:17.912 *********** 2025-06-02 13:12:37.051062 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.051234 | orchestrator | 2025-06-02 13:12:37.054749 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 13:12:37.054791 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.121) 0:00:18.034 *********** 2025-06-02 13:12:37.275609 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.276217 | orchestrator | 2025-06-02 13:12:37.279760 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 13:12:37.279807 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.224) 0:00:18.259 *********** 2025-06-02 13:12:37.379003 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.379632 | orchestrator | 2025-06-02 13:12:37.380699 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 13:12:37.381734 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.103) 0:00:18.363 *********** 2025-06-02 13:12:37.504998 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 13:12:37.505556 | orchestrator | 2025-06-02 13:12:37.508673 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 13:12:37.508699 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.125) 0:00:18.488 *********** 2025-06-02 13:12:37.617935 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.618577 | orchestrator | 2025-06-02 13:12:37.621584 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 13:12:37.621609 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.112) 0:00:18.601 *********** 2025-06-02 13:12:37.731470 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.732222 | orchestrator | 2025-06-02 13:12:37.732840 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 13:12:37.733594 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.112) 0:00:18.714 *********** 2025-06-02 13:12:37.851887 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.852423 | orchestrator | 2025-06-02 13:12:37.853566 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 13:12:37.854088 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.121) 0:00:18.836 *********** 2025-06-02 13:12:37.972623 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:37.973566 | orchestrator | 2025-06-02 13:12:37.974744 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 13:12:37.975557 | orchestrator | Monday 02 June 2025 13:12:37 +0000 (0:00:00.120) 0:00:18.956 *********** 2025-06-02 13:12:38.085944 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:38.086426 | orchestrator | 2025-06-02 13:12:38.087428 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 13:12:38.087936 | 
orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.113) 0:00:19.069 *********** 2025-06-02 13:12:38.194460 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:38.194529 | orchestrator | 2025-06-02 13:12:38.195505 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 13:12:38.196555 | orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.107) 0:00:19.177 *********** 2025-06-02 13:12:38.312180 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:38.313645 | orchestrator | 2025-06-02 13:12:38.314005 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 13:12:38.315175 | orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.118) 0:00:19.296 *********** 2025-06-02 13:12:38.427227 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:38.427460 | orchestrator | 2025-06-02 13:12:38.428397 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 13:12:38.429007 | orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.114) 0:00:19.411 *********** 2025-06-02 13:12:38.547147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:38.547518 | orchestrator | 2025-06-02 13:12:38.548550 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 13:12:38.549233 | orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.120) 0:00:19.531 *********** 2025-06-02 13:12:38.682520 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:38.683699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:38.684179 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
13:12:38.684837 | orchestrator | 2025-06-02 13:12:38.685978 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 13:12:38.687066 | orchestrator | Monday 02 June 2025 13:12:38 +0000 (0:00:00.135) 0:00:19.666 *********** 2025-06-02 13:12:39.017885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.020248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.020968 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.021569 | orchestrator | 2025-06-02 13:12:39.022327 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 13:12:39.022917 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.332) 0:00:19.999 *********** 2025-06-02 13:12:39.168302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.169150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.170425 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.171162 | orchestrator | 2025-06-02 13:12:39.171676 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 13:12:39.172548 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.152) 0:00:20.151 *********** 2025-06-02 13:12:39.311551 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 
13:12:39.311702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.311787 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.311981 | orchestrator | 2025-06-02 13:12:39.312580 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 13:12:39.313032 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.143) 0:00:20.294 *********** 2025-06-02 13:12:39.452490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.452744 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.453801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.454244 | orchestrator | 2025-06-02 13:12:39.456211 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 13:12:39.456235 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.140) 0:00:20.435 *********** 2025-06-02 13:12:39.601230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.601526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.602858 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.603459 | orchestrator | 2025-06-02 13:12:39.604229 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-02 13:12:39.605569 | orchestrator | Monday 02 June 2025 
13:12:39 +0000 (0:00:00.148) 0:00:20.584 *********** 2025-06-02 13:12:39.758490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.759289 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.759595 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.761111 | orchestrator | 2025-06-02 13:12:39.761746 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-02 13:12:39.762404 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.156) 0:00:20.740 *********** 2025-06-02 13:12:39.906936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:39.907034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:39.907972 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:39.908600 | orchestrator | 2025-06-02 13:12:39.909847 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-02 13:12:39.910485 | orchestrator | Monday 02 June 2025 13:12:39 +0000 (0:00:00.148) 0:00:20.889 *********** 2025-06-02 13:12:40.410253 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:40.410583 | orchestrator | 2025-06-02 13:12:40.411075 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-02 13:12:40.411700 | orchestrator | Monday 02 June 2025 13:12:40 +0000 (0:00:00.503) 0:00:21.393 *********** 2025-06-02 13:12:40.901539 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:40.902223 | 
orchestrator | 2025-06-02 13:12:40.903345 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-02 13:12:40.904947 | orchestrator | Monday 02 June 2025 13:12:40 +0000 (0:00:00.490) 0:00:21.883 *********** 2025-06-02 13:12:41.047664 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:12:41.048757 | orchestrator | 2025-06-02 13:12:41.049754 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-02 13:12:41.051831 | orchestrator | Monday 02 June 2025 13:12:41 +0000 (0:00:00.146) 0:00:22.030 *********** 2025-06-02 13:12:41.200435 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'vg_name': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'}) 2025-06-02 13:12:41.200533 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'vg_name': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'}) 2025-06-02 13:12:41.200547 | orchestrator | 2025-06-02 13:12:41.200770 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-02 13:12:41.201738 | orchestrator | Monday 02 June 2025 13:12:41 +0000 (0:00:00.152) 0:00:22.183 *********** 2025-06-02 13:12:41.344299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})  2025-06-02 13:12:41.345628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})  2025-06-02 13:12:41.346667 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:12:41.347222 | orchestrator | 2025-06-02 13:12:41.348225 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-02 13:12:41.349443 | orchestrator | Monday 02 June 2025 13:12:41 +0000 
(0:00:00.143) 0:00:22.326 ***********
2025-06-02 13:12:41.691224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})
2025-06-02 13:12:41.691422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})
2025-06-02 13:12:41.691938 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:12:41.692343 | orchestrator |
2025-06-02 13:12:41.693029 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 13:12:41.693299 | orchestrator | Monday 02 June 2025 13:12:41 +0000 (0:00:00.348) 0:00:22.675 ***********
2025-06-02 13:12:41.845234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})
2025-06-02 13:12:41.846231 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})
2025-06-02 13:12:41.847348 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:12:41.847758 | orchestrator |
2025-06-02 13:12:41.848752 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 13:12:41.849415 | orchestrator | Monday 02 June 2025 13:12:41 +0000 (0:00:00.151) 0:00:22.827 ***********
2025-06-02 13:12:42.146088 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 13:12:42.147566 | orchestrator |     "lvm_report": {
2025-06-02 13:12:42.148638 | orchestrator |         "lv": [
2025-06-02 13:12:42.150914 | orchestrator |             {
2025-06-02 13:12:42.151066 | orchestrator |                 "lv_name": "osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e",
2025-06-02 13:12:42.151887 | orchestrator |                 "vg_name": "ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e"
2025-06-02 13:12:42.152763 | orchestrator |             },
2025-06-02 13:12:42.153571 | orchestrator |             {
2025-06-02 13:12:42.154469 | orchestrator |                 "lv_name": "osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb",
2025-06-02 13:12:42.154996 | orchestrator |                 "vg_name": "ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb"
2025-06-02 13:12:42.155873 | orchestrator |             }
2025-06-02 13:12:42.156224 | orchestrator |         ],
2025-06-02 13:12:42.156822 | orchestrator |         "pv": [
2025-06-02 13:12:42.157235 | orchestrator |             {
2025-06-02 13:12:42.157706 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 13:12:42.158167 | orchestrator |                 "vg_name": "ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e"
2025-06-02 13:12:42.158626 | orchestrator |             },
2025-06-02 13:12:42.159023 | orchestrator |             {
2025-06-02 13:12:42.159505 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 13:12:42.159913 | orchestrator |                 "vg_name": "ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb"
2025-06-02 13:12:42.160397 | orchestrator |             }
2025-06-02 13:12:42.160834 | orchestrator |         ]
2025-06-02 13:12:42.161304 | orchestrator |     }
2025-06-02 13:12:42.161649 | orchestrator | }
2025-06-02 13:12:42.162110 | orchestrator |
2025-06-02 13:12:42.162480 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 13:12:42.163035 | orchestrator |
2025-06-02 13:12:42.163331 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 13:12:42.163726 | orchestrator | Monday 02 June 2025 13:12:42 +0000 (0:00:00.300) 0:00:23.128 ***********
2025-06-02 13:12:42.396913 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 13:12:42.397013 | orchestrator |
2025-06-02 13:12:42.397784 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 13:12:42.398083 | orchestrator | Monday 02 June 2025 13:12:42 +0000 (0:00:00.231) 0:00:23.379 ***********
2025-06-02 13:12:42.627460 | orchestrator | ok:
[testbed-node-4] 2025-06-02 13:12:42.627664 | orchestrator | 2025-06-02 13:12:42.628741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:42.629791 | orchestrator | Monday 02 June 2025 13:12:42 +0000 (0:00:00.231) 0:00:23.610 *********** 2025-06-02 13:12:43.040464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:12:43.041781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 13:12:43.043090 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:12:43.043116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:12:43.043852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:12:43.044730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:12:43.045269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:12:43.045900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:12:43.046492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 13:12:43.046994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:12:43.047518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:12:43.048231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:12:43.048708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:12:43.049441 | orchestrator | 2025-06-02 
13:12:43.049732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:43.050208 | orchestrator | Monday 02 June 2025 13:12:43 +0000 (0:00:00.412) 0:00:24.022 *********** 2025-06-02 13:12:43.229998 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:43.230155 | orchestrator | 2025-06-02 13:12:43.230703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:43.231088 | orchestrator | Monday 02 June 2025 13:12:43 +0000 (0:00:00.190) 0:00:24.213 *********** 2025-06-02 13:12:43.423871 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:43.424498 | orchestrator | 2025-06-02 13:12:43.426097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:43.426563 | orchestrator | Monday 02 June 2025 13:12:43 +0000 (0:00:00.191) 0:00:24.404 *********** 2025-06-02 13:12:43.601949 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:43.602242 | orchestrator | 2025-06-02 13:12:43.602771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:43.603502 | orchestrator | Monday 02 June 2025 13:12:43 +0000 (0:00:00.178) 0:00:24.583 *********** 2025-06-02 13:12:44.156930 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:44.157345 | orchestrator | 2025-06-02 13:12:44.157682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:44.158600 | orchestrator | Monday 02 June 2025 13:12:44 +0000 (0:00:00.555) 0:00:25.139 *********** 2025-06-02 13:12:44.365418 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:44.366861 | orchestrator | 2025-06-02 13:12:44.366908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:44.366918 | orchestrator | Monday 02 June 2025 13:12:44 +0000 (0:00:00.209) 
0:00:25.348 *********** 2025-06-02 13:12:44.599258 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:44.599975 | orchestrator | 2025-06-02 13:12:44.600220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:44.601046 | orchestrator | Monday 02 June 2025 13:12:44 +0000 (0:00:00.233) 0:00:25.582 *********** 2025-06-02 13:12:44.798753 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:44.799852 | orchestrator | 2025-06-02 13:12:44.800779 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:44.801590 | orchestrator | Monday 02 June 2025 13:12:44 +0000 (0:00:00.199) 0:00:25.781 *********** 2025-06-02 13:12:44.990706 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:44.990932 | orchestrator | 2025-06-02 13:12:44.991711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:44.992467 | orchestrator | Monday 02 June 2025 13:12:44 +0000 (0:00:00.190) 0:00:25.972 *********** 2025-06-02 13:12:45.376128 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd) 2025-06-02 13:12:45.376532 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd) 2025-06-02 13:12:45.377730 | orchestrator | 2025-06-02 13:12:45.379115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:45.380812 | orchestrator | Monday 02 June 2025 13:12:45 +0000 (0:00:00.386) 0:00:26.359 *********** 2025-06-02 13:12:45.790315 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27) 2025-06-02 13:12:45.790570 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27) 2025-06-02 13:12:45.791187 | orchestrator | 2025-06-02 13:12:45.791978 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:45.792326 | orchestrator | Monday 02 June 2025 13:12:45 +0000 (0:00:00.413) 0:00:26.773 *********** 2025-06-02 13:12:46.201277 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5) 2025-06-02 13:12:46.202148 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5) 2025-06-02 13:12:46.203657 | orchestrator | 2025-06-02 13:12:46.204465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:46.205537 | orchestrator | Monday 02 June 2025 13:12:46 +0000 (0:00:00.409) 0:00:27.183 *********** 2025-06-02 13:12:46.610168 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85) 2025-06-02 13:12:46.610765 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85) 2025-06-02 13:12:46.611266 | orchestrator | 2025-06-02 13:12:46.611831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 13:12:46.613240 | orchestrator | Monday 02 June 2025 13:12:46 +0000 (0:00:00.410) 0:00:27.594 *********** 2025-06-02 13:12:46.925708 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 13:12:46.925916 | orchestrator | 2025-06-02 13:12:46.927062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:46.928339 | orchestrator | Monday 02 June 2025 13:12:46 +0000 (0:00:00.314) 0:00:27.908 *********** 2025-06-02 13:12:47.508234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 13:12:47.508504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 
13:12:47.510614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 13:12:47.510932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 13:12:47.512759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 13:12:47.513619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 13:12:47.514523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 13:12:47.515117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 13:12:47.515415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 13:12:47.515893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 13:12:47.516246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 13:12:47.516785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 13:12:47.517221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 13:12:47.517764 | orchestrator | 2025-06-02 13:12:47.518203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:47.518521 | orchestrator | Monday 02 June 2025 13:12:47 +0000 (0:00:00.583) 0:00:28.491 *********** 2025-06-02 13:12:47.709046 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:47.709608 | orchestrator | 2025-06-02 13:12:47.710484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:47.711379 | orchestrator | Monday 02 
June 2025 13:12:47 +0000 (0:00:00.200) 0:00:28.692 *********** 2025-06-02 13:12:47.898330 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:47.899023 | orchestrator | 2025-06-02 13:12:47.899700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:47.900309 | orchestrator | Monday 02 June 2025 13:12:47 +0000 (0:00:00.188) 0:00:28.881 *********** 2025-06-02 13:12:48.093803 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:48.093896 | orchestrator | 2025-06-02 13:12:48.094660 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:48.095588 | orchestrator | Monday 02 June 2025 13:12:48 +0000 (0:00:00.195) 0:00:29.076 *********** 2025-06-02 13:12:48.279856 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:48.280578 | orchestrator | 2025-06-02 13:12:48.281262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:48.281991 | orchestrator | Monday 02 June 2025 13:12:48 +0000 (0:00:00.186) 0:00:29.263 *********** 2025-06-02 13:12:48.464678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:48.465050 | orchestrator | 2025-06-02 13:12:48.466008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:48.466837 | orchestrator | Monday 02 June 2025 13:12:48 +0000 (0:00:00.184) 0:00:29.448 *********** 2025-06-02 13:12:48.662453 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:48.662586 | orchestrator | 2025-06-02 13:12:48.663126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:48.664499 | orchestrator | Monday 02 June 2025 13:12:48 +0000 (0:00:00.197) 0:00:29.645 *********** 2025-06-02 13:12:48.872811 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:48.873176 | orchestrator | 2025-06-02 13:12:48.873875 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:48.874645 | orchestrator | Monday 02 June 2025 13:12:48 +0000 (0:00:00.210) 0:00:29.856 *********** 2025-06-02 13:12:49.068461 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:49.069589 | orchestrator | 2025-06-02 13:12:49.071079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:49.072039 | orchestrator | Monday 02 June 2025 13:12:49 +0000 (0:00:00.194) 0:00:30.050 *********** 2025-06-02 13:12:49.868631 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 13:12:49.870182 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 13:12:49.870880 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 13:12:49.872246 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 13:12:49.872836 | orchestrator | 2025-06-02 13:12:49.873998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:49.874899 | orchestrator | Monday 02 June 2025 13:12:49 +0000 (0:00:00.801) 0:00:30.852 *********** 2025-06-02 13:12:50.063126 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:50.064102 | orchestrator | 2025-06-02 13:12:50.065634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:50.066653 | orchestrator | Monday 02 June 2025 13:12:50 +0000 (0:00:00.193) 0:00:31.045 *********** 2025-06-02 13:12:50.253040 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:50.253204 | orchestrator | 2025-06-02 13:12:50.254854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:50.255697 | orchestrator | Monday 02 June 2025 13:12:50 +0000 (0:00:00.190) 0:00:31.236 *********** 2025-06-02 13:12:50.827851 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:50.828270 | 
orchestrator | 2025-06-02 13:12:50.828861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:12:50.829723 | orchestrator | Monday 02 June 2025 13:12:50 +0000 (0:00:00.575) 0:00:31.811 *********** 2025-06-02 13:12:51.030656 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:51.032018 | orchestrator | 2025-06-02 13:12:51.032053 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 13:12:51.032463 | orchestrator | Monday 02 June 2025 13:12:51 +0000 (0:00:00.199) 0:00:32.011 *********** 2025-06-02 13:12:51.160843 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:51.162079 | orchestrator | 2025-06-02 13:12:51.162669 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 13:12:51.163635 | orchestrator | Monday 02 June 2025 13:12:51 +0000 (0:00:00.133) 0:00:32.145 *********** 2025-06-02 13:12:51.345417 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d6dea29-b52d-558c-8900-475fd450038e'}}) 2025-06-02 13:12:51.346949 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '903578c2-c0cc-5204-b647-273ed346895e'}}) 2025-06-02 13:12:51.347432 | orchestrator | 2025-06-02 13:12:51.348427 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 13:12:51.349794 | orchestrator | Monday 02 June 2025 13:12:51 +0000 (0:00:00.183) 0:00:32.328 *********** 2025-06-02 13:12:53.223079 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'}) 2025-06-02 13:12:53.224029 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'}) 2025-06-02 13:12:53.224902 | 
orchestrator | 2025-06-02 13:12:53.226996 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 13:12:53.227671 | orchestrator | Monday 02 June 2025 13:12:53 +0000 (0:00:01.876) 0:00:34.204 *********** 2025-06-02 13:12:53.371327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})  2025-06-02 13:12:53.371499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})  2025-06-02 13:12:53.372048 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:12:53.372568 | orchestrator | 2025-06-02 13:12:53.373817 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 13:12:53.374901 | orchestrator | Monday 02 June 2025 13:12:53 +0000 (0:00:00.149) 0:00:34.354 *********** 2025-06-02 13:12:54.655142 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'}) 2025-06-02 13:12:54.656046 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'}) 2025-06-02 13:12:54.657699 | orchestrator | 2025-06-02 13:12:54.658487 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 13:12:54.659053 | orchestrator | Monday 02 June 2025 13:12:54 +0000 (0:00:01.282) 0:00:35.637 *********** 2025-06-02 13:12:54.800437 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})  2025-06-02 13:12:54.800534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:54.800959 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:54.802254 | orchestrator |
2025-06-02 13:12:54.803051 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 13:12:54.803706 | orchestrator | Monday 02 June 2025 13:12:54 +0000 (0:00:00.145) 0:00:35.783 ***********
2025-06-02 13:12:54.930961 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:54.932001 | orchestrator |
2025-06-02 13:12:54.933386 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 13:12:54.933726 | orchestrator | Monday 02 June 2025 13:12:54 +0000 (0:00:00.131) 0:00:35.914 ***********
2025-06-02 13:12:55.081298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:55.082794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:55.083909 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:55.084561 | orchestrator |
2025-06-02 13:12:55.085157 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-02 13:12:55.085908 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.150) 0:00:36.064 ***********
2025-06-02 13:12:55.211566 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:55.212509 | orchestrator |
2025-06-02 13:12:55.213176 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-02 13:12:55.214157 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.130) 0:00:36.195 ***********
2025-06-02 13:12:55.349968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:55.350326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:55.351180 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:55.352063 | orchestrator |
2025-06-02 13:12:55.353248 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 13:12:55.353449 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.137) 0:00:36.332 ***********
2025-06-02 13:12:55.684423 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:55.684521 | orchestrator |
2025-06-02 13:12:55.685204 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-02 13:12:55.685967 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.332) 0:00:36.665 ***********
2025-06-02 13:12:55.834739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:55.837101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:55.838244 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:55.839124 | orchestrator |
2025-06-02 13:12:55.839819 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-02 13:12:55.840308 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.152) 0:00:36.817 ***********
2025-06-02 13:12:55.971111 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:12:55.972064 | orchestrator |
2025-06-02 13:12:55.973262 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-02 13:12:55.974102 | orchestrator | Monday 02 June 2025 13:12:55 +0000 (0:00:00.136) 0:00:36.954 ***********
2025-06-02 13:12:56.116968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:56.117556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:56.118579 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.119308 | orchestrator |
2025-06-02 13:12:56.120740 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-02 13:12:56.121104 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.145) 0:00:37.100 ***********
2025-06-02 13:12:56.260677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:56.261789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:56.262943 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.264054 | orchestrator |
2025-06-02 13:12:56.264785 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-02 13:12:56.265516 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.143) 0:00:37.243 ***********
2025-06-02 13:12:56.402265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:12:56.402556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:12:56.403724 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.404137 | orchestrator |
2025-06-02 13:12:56.404910 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-02 13:12:56.405417 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.140) 0:00:37.384 ***********
2025-06-02 13:12:56.538775 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.539405 | orchestrator |
2025-06-02 13:12:56.540208 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-02 13:12:56.540926 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.136) 0:00:37.521 ***********
2025-06-02 13:12:56.667319 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.667835 | orchestrator |
2025-06-02 13:12:56.669308 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-02 13:12:56.670217 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.129) 0:00:37.650 ***********
2025-06-02 13:12:56.807188 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:56.808753 | orchestrator |
2025-06-02 13:12:56.809572 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-02 13:12:56.810447 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.140) 0:00:37.790 ***********
2025-06-02 13:12:56.955673 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 13:12:56.956590 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-02 13:12:56.959009 | orchestrator | }
2025-06-02 13:12:56.959821 | orchestrator |
2025-06-02 13:12:56.960594 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-02 13:12:56.961484 | orchestrator | Monday 02 June 2025 13:12:56 +0000 (0:00:00.148) 0:00:37.939 ***********
2025-06-02 13:12:57.095758 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 13:12:57.096615 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-02 13:12:57.097908 | orchestrator | }
2025-06-02 13:12:57.099480 | orchestrator |
2025-06-02 13:12:57.100464 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 13:12:57.100933 | orchestrator | Monday 02 June 2025 13:12:57 +0000 (0:00:00.139) 0:00:38.078 ***********
2025-06-02 13:12:57.239444 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 13:12:57.239688 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 13:12:57.240980 | orchestrator | }
2025-06-02 13:12:57.242111 | orchestrator |
2025-06-02 13:12:57.242556 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 13:12:57.243043 | orchestrator | Monday 02 June 2025 13:12:57 +0000 (0:00:00.143) 0:00:38.221 ***********
2025-06-02 13:12:57.906594 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:12:57.906754 | orchestrator |
2025-06-02 13:12:57.908029 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 13:12:57.908784 | orchestrator | Monday 02 June 2025 13:12:57 +0000 (0:00:00.665) 0:00:38.887 ***********
2025-06-02 13:12:58.423564 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:12:58.424316 | orchestrator |
2025-06-02 13:12:58.425516 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 13:12:58.426113 | orchestrator | Monday 02 June 2025 13:12:58 +0000 (0:00:00.517) 0:00:39.405 ***********
2025-06-02 13:12:58.928906 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:12:58.929602 | orchestrator |
2025-06-02 13:12:58.930566 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 13:12:58.931435 | orchestrator | Monday 02 June 2025 13:12:58 +0000 (0:00:00.505) 0:00:39.911 ***********
2025-06-02 13:12:59.070432 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:12:59.070812 | orchestrator |
2025-06-02 13:12:59.072453 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-02 13:12:59.073634 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.141) 0:00:40.053 ***********
2025-06-02 13:12:59.181193 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.181290 | orchestrator |
2025-06-02 13:12:59.181605 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-02 13:12:59.181904 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.111) 0:00:40.164 ***********
2025-06-02 13:12:59.299873 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.303318 | orchestrator |
2025-06-02 13:12:59.303396 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-02 13:12:59.303918 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.118) 0:00:40.283 ***********
2025-06-02 13:12:59.435068 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 13:12:59.435251 | orchestrator |     "vgs_report": {
2025-06-02 13:12:59.435819 | orchestrator |         "vg": []
2025-06-02 13:12:59.437977 | orchestrator |     }
2025-06-02 13:12:59.438005 | orchestrator | }
2025-06-02 13:12:59.438065 | orchestrator |
2025-06-02 13:12:59.438259 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-02 13:12:59.439058 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.134) 0:00:40.418 ***********
2025-06-02 13:12:59.569477 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.570605 | orchestrator |
2025-06-02 13:12:59.571287 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-02 13:12:59.572200 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.133) 0:00:40.551 ***********
2025-06-02 13:12:59.701782 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.702885 | orchestrator |
2025-06-02 13:12:59.704120 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-02 13:12:59.705060 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.132) 0:00:40.683 ***********
2025-06-02 13:12:59.824086 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.824259 | orchestrator |
2025-06-02 13:12:59.824485 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-02 13:12:59.825128 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.122) 0:00:40.806 ***********
2025-06-02 13:12:59.956615 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:12:59.957287 | orchestrator |
2025-06-02 13:12:59.957837 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-02 13:12:59.958628 | orchestrator | Monday 02 June 2025 13:12:59 +0000 (0:00:00.133) 0:00:40.940 ***********
2025-06-02 13:13:00.081054 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.082734 | orchestrator |
2025-06-02 13:13:00.082774 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 13:13:00.082789 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.122) 0:00:41.063 ***********
2025-06-02 13:13:00.388317 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.388796 | orchestrator |
2025-06-02 13:13:00.389889 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 13:13:00.392013 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.307) 0:00:41.370 ***********
2025-06-02 13:13:00.517854 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.517946 | orchestrator |
2025-06-02 13:13:00.517962 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 13:13:00.518403 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.129) 0:00:41.500 ***********
2025-06-02 13:13:00.656656 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.656843 | orchestrator |
2025-06-02 13:13:00.657043 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 13:13:00.658167 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.139) 0:00:41.640 ***********
2025-06-02 13:13:00.789175 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.789266 | orchestrator |
2025-06-02 13:13:00.789640 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 13:13:00.792718 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.130) 0:00:41.770 ***********
2025-06-02 13:13:00.915207 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:00.915853 | orchestrator |
2025-06-02 13:13:00.916684 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 13:13:00.917504 | orchestrator | Monday 02 June 2025 13:13:00 +0000 (0:00:00.128) 0:00:41.899 ***********
2025-06-02 13:13:01.047469 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.048367 | orchestrator |
2025-06-02 13:13:01.049199 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 13:13:01.049708 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.132) 0:00:42.031 ***********
2025-06-02 13:13:01.189437 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.190178 | orchestrator |
2025-06-02 13:13:01.190550 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 13:13:01.191287 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.142) 0:00:42.173 ***********
2025-06-02 13:13:01.321609 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.322221 | orchestrator |
2025-06-02 13:13:01.322592 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 13:13:01.324275 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.131) 0:00:42.304 ***********
2025-06-02 13:13:01.455428 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.458275 | orchestrator |
2025-06-02 13:13:01.460997 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 13:13:01.462151 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.133) 0:00:42.437 ***********
2025-06-02 13:13:01.602145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:01.602608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:01.603285 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.604173 | orchestrator |
2025-06-02 13:13:01.606327 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 13:13:01.606497 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.147) 0:00:42.585 ***********
2025-06-02 13:13:01.748073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:01.750116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:01.751095 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.751518 | orchestrator |
2025-06-02 13:13:01.752428 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 13:13:01.752985 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.145) 0:00:42.731 ***********
2025-06-02 13:13:01.886818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:01.886984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:01.888022 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:01.889459 | orchestrator |
2025-06-02 13:13:01.889955 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 13:13:01.890513 | orchestrator | Monday 02 June 2025 13:13:01 +0000 (0:00:00.139) 0:00:42.870 ***********
2025-06-02 13:13:02.196689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:02.197096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:02.198543 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:02.199281 | orchestrator |
2025-06-02 13:13:02.200231 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 13:13:02.200709 | orchestrator | Monday 02 June 2025 13:13:02 +0000 (0:00:00.309) 0:00:43.180 ***********
2025-06-02 13:13:02.359435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:02.359635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:02.359994 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:02.362980 | orchestrator |
2025-06-02 13:13:02.363086 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 13:13:02.363692 | orchestrator | Monday 02 June 2025 13:13:02 +0000 (0:00:00.161) 0:00:43.341 ***********
2025-06-02 13:13:02.515394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:02.515903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:02.517314 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:02.518419 | orchestrator |
2025-06-02 13:13:02.519556 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 13:13:02.520539 | orchestrator | Monday 02 June 2025 13:13:02 +0000 (0:00:00.155) 0:00:43.496 ***********
2025-06-02 13:13:02.671130 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:02.671828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:02.672576 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:02.673305 | orchestrator |
2025-06-02 13:13:02.674607 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 13:13:02.674711 | orchestrator | Monday 02 June 2025 13:13:02 +0000 (0:00:00.155) 0:00:43.652 ***********
2025-06-02 13:13:02.814314 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:02.814879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:02.815870 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:02.816951 | orchestrator |
2025-06-02 13:13:02.817648 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 13:13:02.818476 | orchestrator | Monday 02 June 2025 13:13:02 +0000 (0:00:00.145) 0:00:43.798 ***********
2025-06-02 13:13:03.314124 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:13:03.315584 | orchestrator |
2025-06-02 13:13:03.316795 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 13:13:03.317882 | orchestrator | Monday 02 June 2025 13:13:03 +0000 (0:00:00.498) 0:00:44.296 ***********
2025-06-02 13:13:03.820623 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:13:03.820848 | orchestrator |
2025-06-02 13:13:03.821616 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 13:13:03.821933 | orchestrator | Monday 02 June 2025 13:13:03 +0000 (0:00:00.506) 0:00:44.803 ***********
2025-06-02 13:13:03.962995 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:13:03.963610 | orchestrator |
2025-06-02 13:13:03.964593 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 13:13:03.969675 | orchestrator | Monday 02 June 2025 13:13:03 +0000 (0:00:00.143) 0:00:44.946 ***********
2025-06-02 13:13:04.139706 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'vg_name': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:04.139791 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'vg_name': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:04.139883 | orchestrator |
2025-06-02 13:13:04.140324 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 13:13:04.140853 | orchestrator | Monday 02 June 2025 13:13:04 +0000 (0:00:00.174) 0:00:45.121 ***********
2025-06-02 13:13:04.293968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:04.294530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:04.295311 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:04.298262 | orchestrator |
2025-06-02 13:13:04.298912 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 13:13:04.300656 | orchestrator | Monday 02 June 2025 13:13:04 +0000 (0:00:00.154) 0:00:45.275 ***********
2025-06-02 13:13:04.464236 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:04.464990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:04.465954 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:04.466554 | orchestrator |
2025-06-02 13:13:04.467511 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 13:13:04.468856 | orchestrator | Monday 02 June 2025 13:13:04 +0000 (0:00:00.170) 0:00:45.446 ***********
2025-06-02 13:13:04.624024 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:13:04.624515 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:13:04.625659 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:04.626447 | orchestrator |
2025-06-02 13:13:04.628089 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 13:13:04.628116 | orchestrator | Monday 02 June 2025 13:13:04 +0000 (0:00:00.160) 0:00:45.607 ***********
2025-06-02 13:13:05.101446 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 13:13:05.101557 | orchestrator |     "lvm_report": {
2025-06-02 13:13:05.103509 | orchestrator |         "lv": [
2025-06-02 13:13:05.103747 | orchestrator |             {
2025-06-02 13:13:05.104925 | orchestrator |                 "lv_name": "osd-block-4d6dea29-b52d-558c-8900-475fd450038e",
2025-06-02 13:13:05.105820 | orchestrator |                 "vg_name": "ceph-4d6dea29-b52d-558c-8900-475fd450038e"
2025-06-02 13:13:05.106188 | orchestrator |             },
2025-06-02 13:13:05.106211 | orchestrator |             {
2025-06-02 13:13:05.107439 | orchestrator |                 "lv_name": "osd-block-903578c2-c0cc-5204-b647-273ed346895e",
2025-06-02 13:13:05.107645 | orchestrator |                 "vg_name": "ceph-903578c2-c0cc-5204-b647-273ed346895e"
2025-06-02 13:13:05.108070 | orchestrator |             }
2025-06-02 13:13:05.109387 | orchestrator |         ],
2025-06-02 13:13:05.109801 | orchestrator |         "pv": [
2025-06-02 13:13:05.110946 | orchestrator |             {
2025-06-02 13:13:05.111054 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 13:13:05.111593 | orchestrator |                 "vg_name": "ceph-4d6dea29-b52d-558c-8900-475fd450038e"
2025-06-02 13:13:05.112267 | orchestrator |             },
2025-06-02 13:13:05.112293 | orchestrator |             {
2025-06-02 13:13:05.113305 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 13:13:05.113622 | orchestrator |                 "vg_name": "ceph-903578c2-c0cc-5204-b647-273ed346895e"
2025-06-02 13:13:05.113653 | orchestrator |             }
2025-06-02 13:13:05.114133 | orchestrator |         ]
2025-06-02 13:13:05.114282 | orchestrator |     }
2025-06-02 13:13:05.115425 | orchestrator | }
2025-06-02 13:13:05.115494 | orchestrator |
2025-06-02 13:13:05.115677 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 13:13:05.115810 | orchestrator |
2025-06-02 13:13:05.116197 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 13:13:05.116424 | orchestrator | Monday 02 June 2025 13:13:05 +0000 (0:00:00.478) 0:00:46.085 ***********
2025-06-02 13:13:05.353473 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 13:13:05.353797 | orchestrator |
2025-06-02 13:13:05.354188 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 13:13:05.354972 | orchestrator | Monday 02 June 2025 13:13:05 +0000 (0:00:00.251) 0:00:46.337 ***********
2025-06-02 13:13:05.583813 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:05.584529 | orchestrator |
2025-06-02 13:13:05.585379 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:05.585977 | orchestrator | Monday 02 June 2025 13:13:05 +0000 (0:00:00.229) 0:00:46.566 ***********
2025-06-02 13:13:06.000437 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 13:13:06.001417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 13:13:06.002300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 13:13:06.003430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 13:13:06.004864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 13:13:06.005852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 13:13:06.006533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 13:13:06.008241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 13:13:06.009245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 13:13:06.010000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 13:13:06.010813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 13:13:06.011240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 13:13:06.012105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 13:13:06.012781 | orchestrator |
2025-06-02 13:13:06.013619 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.014448 | orchestrator | Monday 02 June 2025 13:13:05 +0000 (0:00:00.416) 0:00:46.983 ***********
2025-06-02 13:13:06.197243 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:06.198377 | orchestrator |
2025-06-02 13:13:06.199378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.200080 | orchestrator | Monday 02 June 2025 13:13:06 +0000 (0:00:00.197) 0:00:47.180 ***********
2025-06-02 13:13:06.403146 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:06.404801 | orchestrator |
2025-06-02 13:13:06.407118 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.410057 | orchestrator | Monday 02 June 2025 13:13:06 +0000 (0:00:00.204) 0:00:47.385 ***********
2025-06-02 13:13:06.603842 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:06.604773 | orchestrator |
2025-06-02 13:13:06.605303 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.605888 | orchestrator | Monday 02 June 2025 13:13:06 +0000 (0:00:00.201) 0:00:47.586 ***********
2025-06-02 13:13:06.792860 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:06.793229 | orchestrator |
2025-06-02 13:13:06.793941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.794660 | orchestrator | Monday 02 June 2025 13:13:06 +0000 (0:00:00.189) 0:00:47.776 ***********
2025-06-02 13:13:06.970426 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:06.970624 | orchestrator |
2025-06-02 13:13:06.971359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:06.971825 | orchestrator | Monday 02 June 2025 13:13:06 +0000 (0:00:00.178) 0:00:47.954 ***********
2025-06-02 13:13:07.518614 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:07.518777 | orchestrator |
2025-06-02 13:13:07.519208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:07.519940 | orchestrator | Monday 02 June 2025 13:13:07 +0000 (0:00:00.547) 0:00:48.502 ***********
2025-06-02 13:13:07.717227 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:07.717515 | orchestrator |
2025-06-02 13:13:07.718458 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:07.719925 | orchestrator | Monday 02 June 2025 13:13:07 +0000 (0:00:00.196) 0:00:48.699 ***********
2025-06-02 13:13:07.906695 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:07.907631 | orchestrator |
2025-06-02 13:13:07.908261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:07.908992 | orchestrator | Monday 02 June 2025 13:13:07 +0000 (0:00:00.191) 0:00:48.890 ***********
2025-06-02 13:13:08.300491 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e)
2025-06-02 13:13:08.300640 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e)
2025-06-02 13:13:08.301106 | orchestrator |
2025-06-02 13:13:08.302194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:08.303192 | orchestrator | Monday 02 June 2025 13:13:08 +0000 (0:00:00.392) 0:00:49.283 ***********
2025-06-02 13:13:08.717196 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de)
2025-06-02 13:13:08.717306 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de)
2025-06-02 13:13:08.718131 | orchestrator |
2025-06-02 13:13:08.719130 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:08.720028 | orchestrator | Monday 02 June 2025 13:13:08 +0000 (0:00:00.415) 0:00:49.698 ***********
2025-06-02 13:13:09.130277 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855)
2025-06-02 13:13:09.130509 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855)
2025-06-02 13:13:09.131020 | orchestrator |
2025-06-02 13:13:09.131869 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:09.132569 | orchestrator | Monday 02 June 2025 13:13:09 +0000 (0:00:00.414) 0:00:50.113 ***********
2025-06-02 13:13:09.538923 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da)
2025-06-02 13:13:09.539090 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da)
2025-06-02 13:13:09.539580 | orchestrator |
2025-06-02 13:13:09.540537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 13:13:09.541455 | orchestrator | Monday 02 June 2025 13:13:09 +0000 (0:00:00.408) 0:00:50.522 ***********
2025-06-02 13:13:09.855510 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 13:13:09.855986 | orchestrator |
2025-06-02 13:13:09.856740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:09.858452 | orchestrator | Monday 02 June 2025 13:13:09 +0000 (0:00:00.315) 0:00:50.838 ***********
2025-06-02 13:13:10.244573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 13:13:10.245341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 13:13:10.245747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 13:13:10.246811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 13:13:10.249019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 13:13:10.249040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 13:13:10.249825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 13:13:10.250652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 13:13:10.251466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 13:13:10.252139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 13:13:10.252567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 13:13:10.253219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 13:13:10.253980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 13:13:10.254458 | orchestrator |
2025-06-02 13:13:10.255196 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:10.255530 | orchestrator | Monday 02 June 2025 13:13:10 +0000 (0:00:00.390) 0:00:51.228 ***********
2025-06-02 13:13:10.425032 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:10.425209 | orchestrator |
2025-06-02 13:13:10.425960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:10.426109 | orchestrator | Monday 02 June 2025 13:13:10 +0000 (0:00:00.180) 0:00:51.409 ***********
2025-06-02 13:13:10.613280 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:10.614718 | orchestrator |
2025-06-02 13:13:10.615527 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:10.616813 | orchestrator | Monday 02 June 2025 13:13:10 +0000 (0:00:00.186) 0:00:51.595 ***********
2025-06-02 13:13:11.143178 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:11.143883 | orchestrator |
2025-06-02 13:13:11.144721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:11.145518 | orchestrator | Monday 02 June 2025 13:13:11 +0000 (0:00:00.530) 0:00:52.126 ***********
2025-06-02 13:13:11.332193 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:11.332299 | orchestrator |
2025-06-02 13:13:11.332986 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:11.333667 | orchestrator | Monday 02 June 2025 13:13:11 +0000 (0:00:00.189) 0:00:52.316 ***********
2025-06-02 13:13:11.531874 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:11.532898 | orchestrator |
2025-06-02 13:13:11.534954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:11.536730 | orchestrator | Monday 02 June 2025 13:13:11 +0000 (0:00:00.198) 0:00:52.514 ***********
2025-06-02 13:13:11.726087 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:11.726540 | orchestrator |
2025-06-02 13:13:11.726563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:11.726575 | orchestrator | Monday 02 June 2025 13:13:11 +0000 (0:00:00.193) 0:00:52.708 ***********
2025-06-02 13:13:11.933461 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:11.933675 | orchestrator |
2025-06-02 13:13:11.934260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:11.934907 | orchestrator | Monday 02 June 2025 13:13:11 +0000 (0:00:00.208) 0:00:52.917 ***********
2025-06-02 13:13:12.118812 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:12.119657 | orchestrator |
2025-06-02 13:13:12.120626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 13:13:12.120991 | orchestrator | Monday 02 June 2025 13:13:12 +0000 (0:00:00.185) 0:00:53.102 ***********
2025-06-02 13:13:12.727488 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 13:13:12.727596 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 13:13:12.727678 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 
13:13:12.728083 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 13:13:12.728450 | orchestrator | 2025-06-02 13:13:12.729103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:13:12.729376 | orchestrator | Monday 02 June 2025 13:13:12 +0000 (0:00:00.607) 0:00:53.709 *********** 2025-06-02 13:13:12.918285 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:12.918804 | orchestrator | 2025-06-02 13:13:12.919221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:13:12.920039 | orchestrator | Monday 02 June 2025 13:13:12 +0000 (0:00:00.192) 0:00:53.902 *********** 2025-06-02 13:13:13.105786 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:13.106670 | orchestrator | 2025-06-02 13:13:13.107068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:13:13.107702 | orchestrator | Monday 02 June 2025 13:13:13 +0000 (0:00:00.187) 0:00:54.089 *********** 2025-06-02 13:13:13.285302 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:13.286067 | orchestrator | 2025-06-02 13:13:13.286607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 13:13:13.287385 | orchestrator | Monday 02 June 2025 13:13:13 +0000 (0:00:00.179) 0:00:54.269 *********** 2025-06-02 13:13:13.462069 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:13.462308 | orchestrator | 2025-06-02 13:13:13.463160 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 13:13:13.466251 | orchestrator | Monday 02 June 2025 13:13:13 +0000 (0:00:00.175) 0:00:54.444 *********** 2025-06-02 13:13:13.780512 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:13.782744 | orchestrator | 2025-06-02 13:13:13.782779 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-02 13:13:13.783476 | orchestrator | Monday 02 June 2025 13:13:13 +0000 (0:00:00.316) 0:00:54.761 *********** 2025-06-02 13:13:13.960517 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e284bd18-e265-58a5-a2ab-ec21b03cc36c'}}) 2025-06-02 13:13:13.960935 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4e8c4e16-432b-566e-bc19-b5260bfeea4e'}}) 2025-06-02 13:13:13.961850 | orchestrator | 2025-06-02 13:13:13.962575 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 13:13:13.963416 | orchestrator | Monday 02 June 2025 13:13:13 +0000 (0:00:00.181) 0:00:54.943 *********** 2025-06-02 13:13:15.728264 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'}) 2025-06-02 13:13:15.729102 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'}) 2025-06-02 13:13:15.730163 | orchestrator | 2025-06-02 13:13:15.730862 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 13:13:15.732798 | orchestrator | Monday 02 June 2025 13:13:15 +0000 (0:00:01.766) 0:00:56.709 *********** 2025-06-02 13:13:15.873869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:15.874815 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:15.875877 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:15.876502 | orchestrator | 2025-06-02 13:13:15.878544 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-02 13:13:15.878568 | orchestrator | Monday 02 June 2025 13:13:15 +0000 (0:00:00.147) 0:00:56.857 *********** 2025-06-02 13:13:17.134304 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'}) 2025-06-02 13:13:17.134561 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'}) 2025-06-02 13:13:17.135135 | orchestrator | 2025-06-02 13:13:17.136136 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 13:13:17.136861 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:01.255) 0:00:58.113 *********** 2025-06-02 13:13:17.279403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:17.279955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:17.280184 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:17.281240 | orchestrator | 2025-06-02 13:13:17.282855 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 13:13:17.282939 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:00.149) 0:00:58.262 *********** 2025-06-02 13:13:17.412742 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:17.413206 | orchestrator | 2025-06-02 13:13:17.413836 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 13:13:17.414689 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:00.133) 0:00:58.396 *********** 2025-06-02 13:13:17.563673 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:17.564640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:17.565761 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:17.566205 | orchestrator | 2025-06-02 13:13:17.566813 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 13:13:17.568595 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:00.151) 0:00:58.547 *********** 2025-06-02 13:13:17.712782 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:17.713654 | orchestrator | 2025-06-02 13:13:17.714595 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 13:13:17.715489 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:00.148) 0:00:58.696 *********** 2025-06-02 13:13:17.866474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:17.867187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:17.867947 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:17.868473 | orchestrator | 2025-06-02 13:13:17.869736 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 13:13:17.869760 | orchestrator | Monday 02 June 2025 13:13:17 +0000 (0:00:00.152) 0:00:58.849 *********** 2025-06-02 13:13:18.015765 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:18.016288 | orchestrator | 2025-06-02 13:13:18.016926 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 13:13:18.017675 | orchestrator | Monday 02 June 2025 13:13:18 +0000 (0:00:00.150) 0:00:58.999 *********** 2025-06-02 13:13:18.178989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:18.179209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:18.180121 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:18.180820 | orchestrator | 2025-06-02 13:13:18.181611 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 13:13:18.182124 | orchestrator | Monday 02 June 2025 13:13:18 +0000 (0:00:00.162) 0:00:59.162 *********** 2025-06-02 13:13:18.315165 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:13:18.315421 | orchestrator | 2025-06-02 13:13:18.316210 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 13:13:18.317059 | orchestrator | Monday 02 June 2025 13:13:18 +0000 (0:00:00.136) 0:00:59.298 *********** 2025-06-02 13:13:18.628240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:18.628602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:18.629439 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:18.630759 | orchestrator | 2025-06-02 13:13:18.631920 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 13:13:18.632707 | orchestrator | Monday 02 June 2025 
13:13:18 +0000 (0:00:00.312) 0:00:59.611 *********** 2025-06-02 13:13:18.780775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:18.780896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:18.781682 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:18.782782 | orchestrator | 2025-06-02 13:13:18.783541 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 13:13:18.784142 | orchestrator | Monday 02 June 2025 13:13:18 +0000 (0:00:00.152) 0:00:59.763 *********** 2025-06-02 13:13:18.930653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:18.931255 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:18.932032 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:18.932721 | orchestrator | 2025-06-02 13:13:18.933738 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 13:13:18.934542 | orchestrator | Monday 02 June 2025 13:13:18 +0000 (0:00:00.148) 0:00:59.912 *********** 2025-06-02 13:13:19.059569 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:19.059828 | orchestrator | 2025-06-02 13:13:19.060457 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 13:13:19.061737 | orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.128) 0:01:00.041 *********** 2025-06-02 13:13:19.189760 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
13:13:19.190667 | orchestrator | 2025-06-02 13:13:19.192002 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 13:13:19.192834 | orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.131) 0:01:00.172 *********** 2025-06-02 13:13:19.323916 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:19.324577 | orchestrator | 2025-06-02 13:13:19.325795 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 13:13:19.326830 | orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.133) 0:01:00.306 *********** 2025-06-02 13:13:19.461296 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:13:19.461858 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 13:13:19.464090 | orchestrator | } 2025-06-02 13:13:19.464601 | orchestrator | 2025-06-02 13:13:19.465605 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 13:13:19.466700 | orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.135) 0:01:00.442 *********** 2025-06-02 13:13:19.589886 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:13:19.590795 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 13:13:19.592045 | orchestrator | } 2025-06-02 13:13:19.593016 | orchestrator | 2025-06-02 13:13:19.593857 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 13:13:19.594589 | orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.130) 0:01:00.573 *********** 2025-06-02 13:13:19.718397 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:13:19.718493 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 13:13:19.718854 | orchestrator | } 2025-06-02 13:13:19.719678 | orchestrator | 2025-06-02 13:13:19.720529 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 13:13:19.721401 | 
orchestrator | Monday 02 June 2025 13:13:19 +0000 (0:00:00.127) 0:01:00.700 *********** 2025-06-02 13:13:20.202370 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:13:20.204782 | orchestrator | 2025-06-02 13:13:20.205849 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 13:13:20.205901 | orchestrator | Monday 02 June 2025 13:13:20 +0000 (0:00:00.483) 0:01:01.183 *********** 2025-06-02 13:13:20.710982 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:13:20.713218 | orchestrator | 2025-06-02 13:13:20.718113 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 13:13:20.718154 | orchestrator | Monday 02 June 2025 13:13:20 +0000 (0:00:00.509) 0:01:01.693 *********** 2025-06-02 13:13:21.209725 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:13:21.209888 | orchestrator | 2025-06-02 13:13:21.211422 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 13:13:21.212519 | orchestrator | Monday 02 June 2025 13:13:21 +0000 (0:00:00.497) 0:01:02.191 *********** 2025-06-02 13:13:21.526404 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:13:21.526572 | orchestrator | 2025-06-02 13:13:21.527990 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 13:13:21.528803 | orchestrator | Monday 02 June 2025 13:13:21 +0000 (0:00:00.318) 0:01:02.509 *********** 2025-06-02 13:13:21.636655 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:21.636815 | orchestrator | 2025-06-02 13:13:21.637933 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 13:13:21.639150 | orchestrator | Monday 02 June 2025 13:13:21 +0000 (0:00:00.109) 0:01:02.619 *********** 2025-06-02 13:13:21.743641 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:21.744669 | orchestrator | 2025-06-02 13:13:21.744950 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 13:13:21.746772 | orchestrator | Monday 02 June 2025 13:13:21 +0000 (0:00:00.107) 0:01:02.727 *********** 2025-06-02 13:13:21.883421 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:13:21.884542 | orchestrator |  "vgs_report": { 2025-06-02 13:13:21.885658 | orchestrator |  "vg": [] 2025-06-02 13:13:21.886830 | orchestrator |  } 2025-06-02 13:13:21.887686 | orchestrator | } 2025-06-02 13:13:21.889105 | orchestrator | 2025-06-02 13:13:21.890276 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 13:13:21.891267 | orchestrator | Monday 02 June 2025 13:13:21 +0000 (0:00:00.140) 0:01:02.867 *********** 2025-06-02 13:13:22.020411 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.022152 | orchestrator | 2025-06-02 13:13:22.022407 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 13:13:22.023186 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.133) 0:01:03.001 *********** 2025-06-02 13:13:22.158803 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.158980 | orchestrator | 2025-06-02 13:13:22.159258 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 13:13:22.160055 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.140) 0:01:03.141 *********** 2025-06-02 13:13:22.285525 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.286128 | orchestrator | 2025-06-02 13:13:22.287183 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 13:13:22.288183 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.127) 0:01:03.269 *********** 2025-06-02 13:13:22.424898 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.425462 | orchestrator | 2025-06-02 13:13:22.426230 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 13:13:22.427350 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.134) 0:01:03.403 *********** 2025-06-02 13:13:22.553954 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.554516 | orchestrator | 2025-06-02 13:13:22.555495 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 13:13:22.556427 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.133) 0:01:03.537 *********** 2025-06-02 13:13:22.676669 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.678077 | orchestrator | 2025-06-02 13:13:22.679419 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 13:13:22.681085 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.122) 0:01:03.660 *********** 2025-06-02 13:13:22.796896 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.797064 | orchestrator | 2025-06-02 13:13:22.797968 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 13:13:22.799026 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.119) 0:01:03.779 *********** 2025-06-02 13:13:22.934392 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:22.935248 | orchestrator | 2025-06-02 13:13:22.936365 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 13:13:22.937722 | orchestrator | Monday 02 June 2025 13:13:22 +0000 (0:00:00.137) 0:01:03.916 *********** 2025-06-02 13:13:23.246170 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.247096 | orchestrator | 2025-06-02 13:13:23.247423 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 13:13:23.248225 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.313) 0:01:04.229 *********** 
2025-06-02 13:13:23.385699 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.385827 | orchestrator | 2025-06-02 13:13:23.386364 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-02 13:13:23.387093 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.139) 0:01:04.369 *********** 2025-06-02 13:13:23.514681 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.515531 | orchestrator | 2025-06-02 13:13:23.516378 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-02 13:13:23.517457 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.129) 0:01:04.498 *********** 2025-06-02 13:13:23.649093 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.649424 | orchestrator | 2025-06-02 13:13:23.650217 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-02 13:13:23.650950 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.132) 0:01:04.630 *********** 2025-06-02 13:13:23.781710 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.782429 | orchestrator | 2025-06-02 13:13:23.783214 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-02 13:13:23.784142 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.134) 0:01:04.765 *********** 2025-06-02 13:13:23.920419 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:23.920918 | orchestrator | 2025-06-02 13:13:23.921625 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-02 13:13:23.922240 | orchestrator | Monday 02 June 2025 13:13:23 +0000 (0:00:00.138) 0:01:04.903 *********** 2025-06-02 13:13:24.077721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 
13:13:24.080059 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.080576 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:24.081242 | orchestrator | 2025-06-02 13:13:24.081720 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-02 13:13:24.082297 | orchestrator | Monday 02 June 2025 13:13:24 +0000 (0:00:00.155) 0:01:05.059 *********** 2025-06-02 13:13:24.220645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:24.221724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.221860 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:24.222831 | orchestrator | 2025-06-02 13:13:24.224779 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-02 13:13:24.224809 | orchestrator | Monday 02 June 2025 13:13:24 +0000 (0:00:00.144) 0:01:05.203 *********** 2025-06-02 13:13:24.372535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:24.373477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.374604 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:24.377836 | orchestrator | 2025-06-02 13:13:24.377869 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-02 13:13:24.378468 | orchestrator | Monday 02 June 2025 
13:13:24 +0000 (0:00:00.152) 0:01:05.356 *********** 2025-06-02 13:13:24.505145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:24.506277 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.508211 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:24.508780 | orchestrator | 2025-06-02 13:13:24.509991 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-02 13:13:24.511127 | orchestrator | Monday 02 June 2025 13:13:24 +0000 (0:00:00.132) 0:01:05.488 *********** 2025-06-02 13:13:24.655588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:24.658268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.658298 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:13:24.658406 | orchestrator | 2025-06-02 13:13:24.659283 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-02 13:13:24.659835 | orchestrator | Monday 02 June 2025 13:13:24 +0000 (0:00:00.149) 0:01:05.638 *********** 2025-06-02 13:13:24.804053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})  2025-06-02 13:13:24.804150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})  2025-06-02 13:13:24.804164 | orchestrator | 
skipping: [testbed-node-5]
2025-06-02 13:13:24.804252 | orchestrator |
2025-06-02 13:13:24.805335 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 13:13:24.805778 | orchestrator | Monday 02 June 2025 13:13:24 +0000 (0:00:00.146) 0:01:05.784 ***********
2025-06-02 13:13:25.132349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:25.132471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:25.132487 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:25.132575 | orchestrator |
2025-06-02 13:13:25.133932 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 13:13:25.134514 | orchestrator | Monday 02 June 2025 13:13:25 +0000 (0:00:00.327) 0:01:06.112 ***********
2025-06-02 13:13:25.280379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:25.280477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:25.280491 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:25.281545 | orchestrator |
2025-06-02 13:13:25.282072 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 13:13:25.282654 | orchestrator | Monday 02 June 2025 13:13:25 +0000 (0:00:00.148) 0:01:06.260 ***********
2025-06-02 13:13:25.778618 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:25.778769 | orchestrator |
2025-06-02 13:13:25.779180 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 13:13:25.780426 | orchestrator | Monday 02 June 2025 13:13:25 +0000 (0:00:00.501) 0:01:06.761 ***********
2025-06-02 13:13:26.288599 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:26.288721 | orchestrator |
2025-06-02 13:13:26.288800 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 13:13:26.289545 | orchestrator | Monday 02 June 2025 13:13:26 +0000 (0:00:00.504) 0:01:07.266 ***********
2025-06-02 13:13:26.431156 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:26.432230 | orchestrator |
2025-06-02 13:13:26.433123 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 13:13:26.433968 | orchestrator | Monday 02 June 2025 13:13:26 +0000 (0:00:00.145) 0:01:07.411 ***********
2025-06-02 13:13:26.603777 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'vg_name': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:26.604090 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'vg_name': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:26.604851 | orchestrator |
2025-06-02 13:13:26.605729 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 13:13:26.606847 | orchestrator | Monday 02 June 2025 13:13:26 +0000 (0:00:00.173) 0:01:07.585 ***********
2025-06-02 13:13:26.747730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:26.748391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:26.749407 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:26.750662 | orchestrator |
2025-06-02 13:13:26.751163 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 13:13:26.752154 | orchestrator | Monday 02 June 2025 13:13:26 +0000 (0:00:00.145) 0:01:07.731 ***********
2025-06-02 13:13:26.897752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:26.897944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:26.899083 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:26.899959 | orchestrator |
2025-06-02 13:13:26.901405 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 13:13:26.902818 | orchestrator | Monday 02 June 2025 13:13:26 +0000 (0:00:00.150) 0:01:07.881 ***********
2025-06-02 13:13:27.037704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:13:27.037894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:13:27.039017 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:27.040787 | orchestrator |
2025-06-02 13:13:27.041506 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 13:13:27.042146 | orchestrator | Monday 02 June 2025 13:13:27 +0000 (0:00:00.138) 0:01:08.020 ***********
2025-06-02 13:13:27.179567 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 13:13:27.180024 | orchestrator |     "lvm_report": {
2025-06-02 13:13:27.180775 | orchestrator |         "lv": [
2025-06-02 13:13:27.181604 | orchestrator |             {
2025-06-02 13:13:27.183051 | orchestrator |                 "lv_name": "osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e",
2025-06-02 13:13:27.183256 | orchestrator |                 "vg_name": "ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e"
2025-06-02 13:13:27.184208 | orchestrator |             },
2025-06-02 13:13:27.185513 | orchestrator |             {
2025-06-02 13:13:27.185679 | orchestrator |                 "lv_name": "osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c",
2025-06-02 13:13:27.185965 | orchestrator |                 "vg_name": "ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c"
2025-06-02 13:13:27.187590 | orchestrator |             }
2025-06-02 13:13:27.187932 | orchestrator |         ],
2025-06-02 13:13:27.188979 | orchestrator |         "pv": [
2025-06-02 13:13:27.189757 | orchestrator |             {
2025-06-02 13:13:27.190378 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 13:13:27.191874 | orchestrator |                 "vg_name": "ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c"
2025-06-02 13:13:27.192418 | orchestrator |             },
2025-06-02 13:13:27.192676 | orchestrator |             {
2025-06-02 13:13:27.193529 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 13:13:27.196475 | orchestrator |                 "vg_name": "ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e"
2025-06-02 13:13:27.196500 | orchestrator |             }
2025-06-02 13:13:27.196511 | orchestrator |         ]
2025-06-02 13:13:27.196522 | orchestrator |     }
2025-06-02 13:13:27.196987 | orchestrator | }
2025-06-02 13:13:27.197798 | orchestrator |
2025-06-02 13:13:27.198219 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:13:27.199148 | orchestrator | 2025-06-02 13:13:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:13:27.199350 | orchestrator | 2025-06-02 13:13:27 | INFO  | Please wait and do not abort execution.
2025-06-02 13:13:27.199726 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 13:13:27.200155 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 13:13:27.200844 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 13:13:27.201264 | orchestrator |
2025-06-02 13:13:27.201893 | orchestrator |
2025-06-02 13:13:27.202438 | orchestrator |
2025-06-02 13:13:27.202759 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:13:27.203217 | orchestrator | Monday 02 June 2025 13:13:27 +0000 (0:00:00.142) 0:01:08.163 ***********
2025-06-02 13:13:27.203905 | orchestrator | ===============================================================================
2025-06-02 13:13:27.204517 | orchestrator | Create block VGs -------------------------------------------------------- 5.63s
2025-06-02 13:13:27.205242 | orchestrator | Create block LVs -------------------------------------------------------- 3.94s
2025-06-02 13:13:27.205474 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s
2025-06-02 13:13:27.205982 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.51s
2025-06-02 13:13:27.206518 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s
2025-06-02 13:13:27.207375 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.50s
2025-06-02 13:13:27.208119 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.50s
2025-06-02 13:13:27.208822 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s
2025-06-02 13:13:27.209517 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2025-06-02 13:13:27.210088 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-06-02 13:13:27.210907 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s
2025-06-02 13:13:27.212277 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-06-02 13:13:27.212917 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s
2025-06-02 13:13:27.213565 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-06-02 13:13:27.214369 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2025-06-02 13:13:27.214855 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.67s
2025-06-02 13:13:27.215329 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.64s
2025-06-02 13:13:27.216280 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.62s
2025-06-02 13:13:27.216833 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.62s
2025-06-02 13:13:27.217552 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.61s
2025-06-02 13:13:29.397826 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:13:29.397937 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:13:29.397952 | orchestrator | Registering Redlock._release_script
2025-06-02 13:13:29.453869 | orchestrator | 2025-06-02 13:13:29 | INFO  | Task da7f66a4-b3f6-4dfd-909f-905c5c603a79 (facts) was prepared for execution.
2025-06-02 13:13:29.453940 | orchestrator | 2025-06-02 13:13:29 | INFO  | It takes a moment until task da7f66a4-b3f6-4dfd-909f-905c5c603a79 (facts) has been started and output is visible here.
2025-06-02 13:13:33.430716 | orchestrator |
2025-06-02 13:13:33.431451 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 13:13:33.433079 | orchestrator |
2025-06-02 13:13:33.435867 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 13:13:33.435897 | orchestrator | Monday 02 June 2025 13:13:33 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-06-02 13:13:34.433384 | orchestrator | ok: [testbed-manager]
2025-06-02 13:13:34.434611 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:13:34.437732 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:13:34.437759 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:13:34.437771 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:13:34.437782 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:13:34.439670 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:34.440506 | orchestrator |
2025-06-02 13:13:34.445256 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 13:13:34.445325 | orchestrator | Monday 02 June 2025 13:13:34 +0000 (0:00:01.002) 0:00:01.255 ***********
2025-06-02 13:13:34.592818 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:13:34.663416 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:13:34.741547 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:13:34.811909 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:13:34.893471 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:13:35.443204 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:35.443400 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:35.444845 | orchestrator |
2025-06-02 13:13:35.446246 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 13:13:35.447115 | orchestrator |
2025-06-02 13:13:35.447881 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 13:13:35.448689 | orchestrator | Monday 02 June 2025 13:13:35 +0000 (0:00:01.012) 0:00:02.268 ***********
2025-06-02 13:13:40.042127 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:13:40.042450 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:13:40.042784 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:13:40.044390 | orchestrator | ok: [testbed-manager]
2025-06-02 13:13:40.045580 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:13:40.046381 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:13:40.046854 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:13:40.047930 | orchestrator |
2025-06-02 13:13:40.048549 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 13:13:40.049244 | orchestrator |
2025-06-02 13:13:40.050085 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 13:13:40.050582 | orchestrator | Monday 02 June 2025 13:13:40 +0000 (0:00:04.598) 0:00:06.866 ***********
2025-06-02 13:13:40.190776 | orchestrator | skipping: [testbed-manager]
2025-06-02 13:13:40.263508 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:13:40.349949 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:13:40.423510 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:13:40.497974 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:13:40.543456 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:13:40.543957 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:13:40.544854 | orchestrator |
2025-06-02 13:13:40.545577 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:13:40.546514 | orchestrator | 2025-06-02 13:13:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:13:40.546562 | orchestrator | 2025-06-02 13:13:40 | INFO  | Please wait and do not abort execution.
2025-06-02 13:13:40.547442 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.548205 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.548905 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.549601 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.549933 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.550594 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.551088 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:13:40.551834 | orchestrator |
2025-06-02 13:13:40.552339 | orchestrator |
2025-06-02 13:13:40.552776 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:13:40.553277 | orchestrator | Monday 02 June 2025 13:13:40 +0000 (0:00:00.501) 0:00:07.368 ***********
2025-06-02 13:13:40.553631 | orchestrator | ===============================================================================
2025-06-02 13:13:40.553963 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.60s
2025-06-02 13:13:40.554524 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.01s
2025-06-02 13:13:40.554761 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s
2025-06-02 13:13:40.555198 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-06-02 13:13:41.098516 | orchestrator |
2025-06-02 13:13:41.101749 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 13:13:41 UTC 2025
2025-06-02 13:13:41.101794 | orchestrator |
2025-06-02 13:13:42.738902 | orchestrator | 2025-06-02 13:13:42 | INFO  | Collection nutshell is prepared for execution
2025-06-02 13:13:42.739007 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [0] - dotfiles
2025-06-02 13:13:42.743912 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:13:42.743968 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:13:42.743980 | orchestrator | Registering Redlock._release_script
2025-06-02 13:13:42.748651 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [0] - homer
2025-06-02 13:13:42.748690 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [0] - netdata
2025-06-02 13:13:42.748702 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [0] - openstackclient
2025-06-02 13:13:42.748712 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [0] - phpmyadmin
2025-06-02 13:13:42.748969 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [0] - common
2025-06-02 13:13:42.750360 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [1] -- loadbalancer
2025-06-02 13:13:42.750390 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [2] --- opensearch
2025-06-02 13:13:42.750807 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [2] --- mariadb-ng
2025-06-02 13:13:42.750908 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [3] ---- horizon
2025-06-02 13:13:42.750934 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [3] ---- keystone
2025-06-02 13:13:42.750955 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [4] ----- neutron
2025-06-02 13:13:42.751069 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ wait-for-nova
2025-06-02 13:13:42.751096 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [5] ------ octavia
2025-06-02 13:13:42.751201 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- barbican
2025-06-02 13:13:42.751220 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- designate
2025-06-02 13:13:42.751231 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- ironic
2025-06-02 13:13:42.751542 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- placement
2025-06-02 13:13:42.751568 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- magnum
2025-06-02 13:13:42.751798 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [1] -- openvswitch
2025-06-02 13:13:42.752523 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [2] --- ovn
2025-06-02 13:13:42.752547 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [1] -- memcached
2025-06-02 13:13:42.752559 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [1] -- redis
2025-06-02 13:13:42.752572 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [1] -- rabbitmq-ng
2025-06-02 13:13:42.752585 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [0] - kubernetes
2025-06-02 13:13:42.754014 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [1] -- kubeconfig
2025-06-02 13:13:42.754096 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [1] -- copy-kubeconfig
2025-06-02 13:13:42.754161 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [0] - ceph
2025-06-02 13:13:42.755485 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [1] -- ceph-pools
2025-06-02 13:13:42.755514 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [2] --- copy-ceph-keys
2025-06-02 13:13:42.755851 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [3] ---- cephclient
2025-06-02 13:13:42.755875 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-02 13:13:42.755886 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [4] ----- wait-for-keystone
2025-06-02 13:13:42.755896 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-02 13:13:42.756121 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ glance
2025-06-02 13:13:42.756142 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ cinder
2025-06-02 13:13:42.756152 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ nova
2025-06-02 13:13:42.756405 | orchestrator | 2025-06-02 13:13:42 | INFO  | A [4] ----- prometheus
2025-06-02 13:13:42.756427 | orchestrator | 2025-06-02 13:13:42 | INFO  | D [5] ------ grafana
2025-06-02 13:13:42.928543 | orchestrator | 2025-06-02 13:13:42 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-02 13:13:42.928629 | orchestrator | 2025-06-02 13:13:42 | INFO  | Tasks are running in the background
2025-06-02 13:13:45.280465 | orchestrator | 2025-06-02 13:13:45 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-02 13:13:47.412961 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:13:47.413481 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:13:47.414055 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:13:47.415826 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:13:47.419064 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:13:47.420489 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:13:47.420970 | orchestrator | 2025-06-02 13:13:47 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:13:47.421055 | orchestrator | 2025-06-02 13:13:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:13:50.461622 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:13:50.464686 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:13:50.464750 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:13:50.464834 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:13:50.465364 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:13:50.465993 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:13:50.469934 | orchestrator | 2025-06-02 13:13:50 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:13:50.470439 | orchestrator | 2025-06-02 13:13:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:13:53.502494 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:13:53.502645 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:13:53.502967 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:13:53.503354 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:13:53.503824 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:13:53.506128 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:13:53.506481 | orchestrator | 2025-06-02 13:13:53 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:13:53.506502 | orchestrator | 2025-06-02 13:13:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:13:56.558833 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:13:56.559074 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:13:56.560222 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:13:56.560872 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:13:56.562518 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:13:56.566432 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:13:56.566459 | orchestrator | 2025-06-02 13:13:56 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:13:56.566471 | orchestrator | 2025-06-02 13:13:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:13:59.622776 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:13:59.623438 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:13:59.625214 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:13:59.625254 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:13:59.626070 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:13:59.626415 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:13:59.627205 | orchestrator | 2025-06-02 13:13:59 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:13:59.627299 | orchestrator | 2025-06-02 13:13:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:02.705526 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:14:02.708097 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:02.708521 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:02.711320 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:14:02.714147 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:02.716287 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:02.717566 | orchestrator | 2025-06-02 13:14:02 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:02.717594 | orchestrator | 2025-06-02 13:14:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:05.804757 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:14:05.810561 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:05.810595 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:05.810608 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:14:05.817423 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:05.817447 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:05.817846 | orchestrator | 2025-06-02 13:14:05 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:05.817865 | orchestrator | 2025-06-02 13:14:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:08.872948 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:14:08.873072 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:08.873088 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:08.873100 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state STARTED
2025-06-02 13:14:08.873181 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:08.873658 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:08.876642 | orchestrator | 2025-06-02 13:14:08 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:08.876662 | orchestrator | 2025-06-02 13:14:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:11.925596 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED
2025-06-02 13:14:11.925890 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:11.926883 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:11.927975 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 8e1e1e43-bb9f-47f7-8432-65b3597ddbae is in state SUCCESS
2025-06-02 13:14:11.928829 | orchestrator |
2025-06-02 13:14:11.928862 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-02 13:14:11.928874 | orchestrator |
2025-06-02 13:14:11.928885 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-06-02 13:14:11.928896 | orchestrator | Monday 02 June 2025 13:13:54 +0000 (0:00:00.642) 0:00:00.642 ***********
2025-06-02 13:14:11.928907 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:11.928918 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:11.928928 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:11.928939 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:11.928949 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:11.928960 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:11.928970 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:11.928980 | orchestrator |
2025-06-02 13:14:11.928991 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-02 13:14:11.929001 | orchestrator | Monday 02 June 2025 13:13:58 +0000 (0:00:04.367) 0:00:05.009 ***********
2025-06-02 13:14:11.929012 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 13:14:11.929023 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 13:14:11.929033 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 13:14:11.929044 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 13:14:11.929054 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 13:14:11.929064 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 13:14:11.929076 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 13:14:11.929087 | orchestrator |
2025-06-02 13:14:11.929098 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-02 13:14:11.929109 | orchestrator | Monday 02 June 2025 13:14:00 +0000 (0:00:01.706) 0:00:06.716 ***********
2025-06-02 13:14:11.929128 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:13:59.497589', 'end': '2025-06-02 13:13:59.502035', 'delta': '0:00:00.004446', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929163 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:13:59.568422', 'end': '2025-06-02 13:13:59.574237', 'delta': '0:00:00.005815', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929176 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:13:59.535742', 'end': '2025-06-02 13:13:59.541110', 'delta': '0:00:00.005368', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929207 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:13:59.788097', 'end': '2025-06-02 13:13:59.799435', 'delta': '0:00:00.011338', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929219 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:13:59.957235', 'end': '2025-06-02 13:13:59.966371', 'delta': '0:00:00.009136', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929235 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:14:00.024695', 'end': '2025-06-02 13:14:00.033221', 'delta': '0:00:00.008526', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929283 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 13:14:00.123407', 'end': '2025-06-02 13:14:00.131925', 'delta': '0:00:00.008518', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 13:14:11.929297 | orchestrator |
2025-06-02 13:14:11.929308 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-02 13:14:11.929319 | orchestrator | Monday 02 June 2025 13:14:02 +0000 (0:00:02.705) 0:00:09.421 ***********
2025-06-02 13:14:11.929330 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 13:14:11.929340 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 13:14:11.929351 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 13:14:11.929362 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 13:14:11.929372 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 13:14:11.929383 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 13:14:11.929393 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 13:14:11.929404 | orchestrator |
2025-06-02 13:14:11.929415 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
****************** 2025-06-02 13:14:11.929426 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:01.492) 0:00:10.913 *********** 2025-06-02 13:14:11.929437 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-02 13:14:11.929447 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-02 13:14:11.929458 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-02 13:14:11.929469 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-02 13:14:11.929479 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-02 13:14:11.929490 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-02 13:14:11.929501 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-02 13:14:11.929512 | orchestrator | 2025-06-02 13:14:11.929522 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:14:11.929541 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929553 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929564 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929575 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929586 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929603 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929614 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:14:11.929625 | orchestrator | 2025-06-02 13:14:11.929636 | orchestrator | 2025-06-02 13:14:11.929647 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-06-02 13:14:11.929657 | orchestrator | Monday 02 June 2025 13:14:08 +0000 (0:00:04.148) 0:00:15.061 *********** 2025-06-02 13:14:11.929668 | orchestrator | =============================================================================== 2025-06-02 13:14:11.929679 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.37s 2025-06-02 13:14:11.929690 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.15s 2025-06-02 13:14:11.929700 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.71s 2025-06-02 13:14:11.929711 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.71s 2025-06-02 13:14:11.929722 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.49s 2025-06-02 13:14:11.930951 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:11.932327 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:11.933839 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:11.940545 | orchestrator | 2025-06-02 13:14:11 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:11.940855 | orchestrator | 2025-06-02 13:14:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:14.979643 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED 2025-06-02 13:14:14.979839 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:14:14.980230 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is 
in state STARTED 2025-06-02 13:14:14.982762 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:14.983235 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:14.983910 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:14.984437 | orchestrator | 2025-06-02 13:14:14 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:14.984513 | orchestrator | 2025-06-02 13:14:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:18.059982 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED 2025-06-02 13:14:18.060096 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:14:18.073492 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED 2025-06-02 13:14:18.073544 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:18.076285 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:18.076702 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:18.077105 | orchestrator | 2025-06-02 13:14:18 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:18.077293 | orchestrator | 2025-06-02 13:14:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:21.121956 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED 2025-06-02 13:14:21.122091 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in 
state STARTED 2025-06-02 13:14:21.124447 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED 2025-06-02 13:14:21.125114 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:21.125634 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:21.125994 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:21.126684 | orchestrator | 2025-06-02 13:14:21 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:21.126710 | orchestrator | 2025-06-02 13:14:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:24.173667 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state STARTED 2025-06-02 13:14:24.174396 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:14:24.175163 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED 2025-06-02 13:14:24.175635 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:24.176894 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:24.178927 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:24.179807 | orchestrator | 2025-06-02 13:14:24 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:24.179892 | orchestrator | 2025-06-02 13:14:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:27.220735 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state 
STARTED 2025-06-02 13:14:27.224935 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:14:27.228481 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED 2025-06-02 13:14:27.237178 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:27.247691 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:27.247735 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:27.247748 | orchestrator | 2025-06-02 13:14:27 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:27.247759 | orchestrator | 2025-06-02 13:14:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:14:30.301260 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task b7d4699f-475d-42e3-bb75-12ed33a074be is in state SUCCESS 2025-06-02 13:14:30.301371 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:14:30.301971 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED 2025-06-02 13:14:30.303409 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED 2025-06-02 13:14:30.306344 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED 2025-06-02 13:14:30.307198 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:14:30.309678 | orchestrator | 2025-06-02 13:14:30 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED 2025-06-02 13:14:30.309699 | orchestrator | 2025-06-02 13:14:30 | INFO  | Wait 1 second(s) until the next check 
2025-06-02 13:14:33.364276 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:33.367424 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:33.369509 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:33.374106 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:33.375014 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:33.380457 | orchestrator | 2025-06-02 13:14:33 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:33.380479 | orchestrator | 2025-06-02 13:14:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:36.452030 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:36.452159 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:36.463931 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:36.463987 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:36.463999 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:36.464035 | orchestrator | 2025-06-02 13:14:36 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:36.464047 | orchestrator | 2025-06-02 13:14:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:39.522353 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:39.522765 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:39.523600 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:39.524232 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state STARTED
2025-06-02 13:14:39.525018 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:39.526426 | orchestrator | 2025-06-02 13:14:39 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:39.526494 | orchestrator | 2025-06-02 13:14:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:42.567489 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:42.569934 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:42.571072 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:42.572424 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task 4911a5a0-e719-4ec9-ba0f-0b27e74c5a22 is in state SUCCESS
2025-06-02 13:14:42.575779 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:42.575829 | orchestrator | 2025-06-02 13:14:42 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:42.575847 | orchestrator | 2025-06-02 13:14:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:45.617105 | orchestrator | 2025-06-02 13:14:45 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:45.617264 | orchestrator | 2025-06-02 13:14:45 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:45.618435 | orchestrator | 2025-06-02 13:14:45 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:45.619813 | orchestrator | 2025-06-02 13:14:45 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:45.622636 | orchestrator | 2025-06-02 13:14:45 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:45.622670 | orchestrator | 2025-06-02 13:14:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:48.665351 | orchestrator | 2025-06-02 13:14:48 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:48.667114 | orchestrator | 2025-06-02 13:14:48 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:48.668398 | orchestrator | 2025-06-02 13:14:48 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:48.672543 | orchestrator | 2025-06-02 13:14:48 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:48.673426 | orchestrator | 2025-06-02 13:14:48 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:48.673451 | orchestrator | 2025-06-02 13:14:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:51.717854 | orchestrator | 2025-06-02 13:14:51 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:51.718869 | orchestrator | 2025-06-02 13:14:51 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:51.722005 | orchestrator | 2025-06-02 13:14:51 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:51.724685 | orchestrator | 2025-06-02 13:14:51 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:51.725725 | orchestrator | 2025-06-02 13:14:51 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:51.725748 | orchestrator | 2025-06-02 13:14:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:54.765232 | orchestrator | 2025-06-02 13:14:54 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:54.766140 | orchestrator | 2025-06-02 13:14:54 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:54.768406 | orchestrator | 2025-06-02 13:14:54 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state STARTED
2025-06-02 13:14:54.773096 | orchestrator | 2025-06-02 13:14:54 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:54.774453 | orchestrator | 2025-06-02 13:14:54 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:54.774894 | orchestrator | 2025-06-02 13:14:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:14:57.811345 | orchestrator | 2025-06-02 13:14:57 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:14:57.813185 | orchestrator | 2025-06-02 13:14:57 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:14:57.822158 | orchestrator |
2025-06-02 13:14:57.822230 | orchestrator |
2025-06-02 13:14:57.822244 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-02 13:14:57.822256 | orchestrator |
2025-06-02 13:14:57.822267 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-02 13:14:57.822279 | orchestrator | Monday 02 June 2025 13:13:54 +0000 (0:00:00.450) 0:00:00.450 ***********
2025-06-02 13:14:57.822291 | orchestrator | ok: [testbed-manager] => {
2025-06-02 13:14:57.822304 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-02 13:14:57.822317 | orchestrator | }
2025-06-02 13:14:57.822328 | orchestrator |
2025-06-02 13:14:57.822339 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-02 13:14:57.822350 | orchestrator | Monday 02 June 2025 13:13:54 +0000 (0:00:00.524) 0:00:00.975 ***********
2025-06-02 13:14:57.822361 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.822374 | orchestrator |
2025-06-02 13:14:57.822384 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-02 13:14:57.822395 | orchestrator | Monday 02 June 2025 13:13:56 +0000 (0:00:01.723) 0:00:02.698 ***********
2025-06-02 13:14:57.822406 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-02 13:14:57.822417 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-02 13:14:57.822428 | orchestrator |
2025-06-02 13:14:57.822439 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-02 13:14:57.822450 | orchestrator | Monday 02 June 2025 13:13:57 +0000 (0:00:01.218) 0:00:03.916 ***********
2025-06-02 13:14:57.822461 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.822471 | orchestrator |
2025-06-02 13:14:57.822482 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-02 13:14:57.822493 | orchestrator | Monday 02 June 2025 13:13:59 +0000 (0:00:01.916) 0:00:05.832 ***********
2025-06-02 13:14:57.822503 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.822514 | orchestrator |
2025-06-02 13:14:57.822525 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-02 13:14:57.822535 | orchestrator | Monday 02 June 2025 13:14:01 +0000 (0:00:01.390) 0:00:07.223 ***********
2025-06-02 13:14:57.822546 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-02 13:14:57.822557 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.822568 | orchestrator |
2025-06-02 13:14:57.822579 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-02 13:14:57.822590 | orchestrator | Monday 02 June 2025 13:14:25 +0000 (0:00:24.356) 0:00:31.579 ***********
2025-06-02 13:14:57.822600 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.822611 | orchestrator |
2025-06-02 13:14:57.822622 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:14:57.822633 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.822645 | orchestrator |
2025-06-02 13:14:57.822656 | orchestrator |
2025-06-02 13:14:57.822669 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:14:57.822681 | orchestrator | Monday 02 June 2025 13:14:27 +0000 (0:00:01.714) 0:00:33.293 ***********
2025-06-02 13:14:57.822695 | orchestrator | ===============================================================================
2025-06-02 13:14:57.822707 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.35s
2025-06-02 13:14:57.822739 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.92s
2025-06-02 13:14:57.822750 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.72s
2025-06-02 13:14:57.822761 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.71s
2025-06-02 13:14:57.822771 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.39s
2025-06-02 13:14:57.822782 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.22s
2025-06-02 13:14:57.822792 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.52s
2025-06-02 13:14:57.822803 | orchestrator |
2025-06-02 13:14:57.822814 | orchestrator |
2025-06-02 13:14:57.822824 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-02 13:14:57.822835 | orchestrator |
2025-06-02 13:14:57.822846 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-02 13:14:57.822856 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:00.586) 0:00:00.586 ***********
2025-06-02 13:14:57.822867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-02 13:14:57.822879 | orchestrator |
2025-06-02 13:14:57.822889 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-02 13:14:57.822907 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:00.452) 0:00:01.038 ***********
2025-06-02 13:14:57.822918 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-02 13:14:57.822929 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-02 13:14:57.822939 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-02 13:14:57.822950 | orchestrator |
2025-06-02 13:14:57.822961 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-02 13:14:57.822972 | orchestrator | Monday 02 June 2025 13:13:55 +0000 (0:00:02.014) 0:00:03.053 ***********
2025-06-02 13:14:57.822982 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.822993 | orchestrator |
2025-06-02 13:14:57.823003 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-02 13:14:57.823014 | orchestrator | Monday 02 June 2025 13:13:57 +0000 (0:00:01.949) 0:00:05.003 ***********
2025-06-02 13:14:57.823038 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-02 13:14:57.823049 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.823060 | orchestrator |
2025-06-02 13:14:57.823070 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-02 13:14:57.823081 | orchestrator | Monday 02 June 2025 13:14:33 +0000 (0:00:36.196) 0:00:41.200 ***********
2025-06-02 13:14:57.823092 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.823102 | orchestrator |
2025-06-02 13:14:57.823113 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-02 13:14:57.823123 | orchestrator | Monday 02 June 2025 13:14:34 +0000 (0:00:00.851) 0:00:42.051 ***********
2025-06-02 13:14:57.823134 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.823145 | orchestrator |
2025-06-02 13:14:57.823155 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-02 13:14:57.823194 | orchestrator | Monday 02 June 2025 13:14:35 +0000 (0:00:00.844) 0:00:42.896 ***********
2025-06-02 13:14:57.823207 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.823218 | orchestrator |
2025-06-02 13:14:57.823228 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-02 13:14:57.823239 | orchestrator | Monday 02 June 2025 13:14:38 +0000 (0:00:02.944) 0:00:45.841 ***********
2025-06-02 13:14:57.823250 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.823260 | orchestrator |
2025-06-02 13:14:57.823271 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-02 13:14:57.823289 | orchestrator | Monday 02 June 2025 13:14:39 +0000 (0:00:01.260) 0:00:47.102 ***********
2025-06-02 13:14:57.823300 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.823311 | orchestrator |
2025-06-02 13:14:57.823322 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-02 13:14:57.823332 | orchestrator | Monday 02 June 2025 13:14:40 +0000 (0:00:01.055) 0:00:48.157 ***********
2025-06-02 13:14:57.823343 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.823354 | orchestrator |
2025-06-02 13:14:57.823365 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:14:57.823376 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.823393 | orchestrator |
2025-06-02 13:14:57.823412 | orchestrator |
2025-06-02 13:14:57.823423 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:14:57.823434 | orchestrator | Monday 02 June 2025 13:14:41 +0000 (0:00:00.605) 0:00:48.763 ***********
2025-06-02 13:14:57.823445 | orchestrator | ===============================================================================
2025-06-02 13:14:57.823455 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.20s
2025-06-02 13:14:57.823466 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.94s
2025-06-02 13:14:57.823476 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.02s
2025-06-02 13:14:57.823487 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.95s
2025-06-02 13:14:57.823497 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.26s
2025-06-02 13:14:57.823508 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.06s
2025-06-02 13:14:57.823518 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.85s
2025-06-02 13:14:57.823529 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.84s 2025-06-02 13:14:57.823539 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.61s 2025-06-02 13:14:57.823550 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.45s 2025-06-02 13:14:57.823560 | orchestrator | 2025-06-02 13:14:57.823571 | orchestrator | 2025-06-02 13:14:57.823582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:14:57.823592 | orchestrator | 2025-06-02 13:14:57.823603 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:14:57.823613 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:00.483) 0:00:00.483 *********** 2025-06-02 13:14:57.823624 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-02 13:14:57.823634 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-02 13:14:57.823645 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-02 13:14:57.823655 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-02 13:14:57.823666 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-02 13:14:57.823677 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-02 13:14:57.823687 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-02 13:14:57.823698 | orchestrator | 2025-06-02 13:14:57.823708 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-02 13:14:57.823719 | orchestrator | 2025-06-02 13:14:57.823729 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-02 13:14:57.823740 | orchestrator | Monday 02 June 2025 13:13:55 +0000 
(0:00:02.362) 0:00:02.846 ***********
2025-06-02 13:14:57.823765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:14:57.823779 | orchestrator |
2025-06-02 13:14:57.823796 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-02 13:14:57.823807 | orchestrator | Monday 02 June 2025 13:13:58 +0000 (0:00:02.473) 0:00:05.320 ***********
2025-06-02 13:14:57.823818 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.823828 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:14:57.823839 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:14:57.823849 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:14:57.823860 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:14:57.823877 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:14:57.823888 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:14:57.823899 | orchestrator |
2025-06-02 13:14:57.823910 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-02 13:14:57.823920 | orchestrator | Monday 02 June 2025 13:14:00 +0000 (0:00:02.220) 0:00:07.540 ***********
2025-06-02 13:14:57.823931 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.823941 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:14:57.823952 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:14:57.823963 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:14:57.823973 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:14:57.823984 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:14:57.823994 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:14:57.824005 | orchestrator |
2025-06-02 13:14:57.824015 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-02 13:14:57.824026 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:03.777) 0:00:11.318 ***********
2025-06-02 13:14:57.824036 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:57.824047 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:57.824057 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.824068 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:57.824079 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:57.824089 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:57.824100 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:57.824110 | orchestrator |
2025-06-02 13:14:57.824121 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-02 13:14:57.824131 | orchestrator | Monday 02 June 2025 13:14:07 +0000 (0:00:03.292) 0:00:14.610 ***********
2025-06-02 13:14:57.824142 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.824152 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:57.824163 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:57.824229 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:57.824240 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:57.824251 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:57.824262 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:57.824272 | orchestrator |
2025-06-02 13:14:57.824283 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-02 13:14:57.824294 | orchestrator | Monday 02 June 2025 13:14:16 +0000 (0:00:09.221) 0:00:23.832 ***********
2025-06-02 13:14:57.824304 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.824315 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:57.824326 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:57.824336 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:57.824345 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:57.824355 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:57.824364 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:57.824374 | orchestrator |
2025-06-02 13:14:57.824383 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-02 13:14:57.824393 | orchestrator | Monday 02 June 2025 13:14:35 +0000 (0:00:18.256) 0:00:42.088 ***********
2025-06-02 13:14:57.824403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:14:57.824414 | orchestrator |
2025-06-02 13:14:57.824424 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-02 13:14:57.824439 | orchestrator | Monday 02 June 2025 13:14:37 +0000 (0:00:02.114) 0:00:44.203 ***********
2025-06-02 13:14:57.824448 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-02 13:14:57.824458 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-02 13:14:57.824468 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-02 13:14:57.824481 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-02 13:14:57.824500 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-02 13:14:57.824516 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-02 13:14:57.824534 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-02 13:14:57.824552 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-02 13:14:57.824571 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-02 13:14:57.824590 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-02 13:14:57.824607 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-02 13:14:57.824618 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-02 13:14:57.824627 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-02 13:14:57.824636 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-02 13:14:57.824646 | orchestrator |
2025-06-02 13:14:57.824689 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-02 13:14:57.824700 | orchestrator | Monday 02 June 2025 13:14:43 +0000 (0:00:01.280) 0:00:50.608 ***********
2025-06-02 13:14:57.824710 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.824719 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:14:57.824733 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:14:57.824742 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:14:57.824752 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:14:57.824761 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:14:57.824771 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:14:57.824780 | orchestrator |
2025-06-02 13:14:57.824789 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-02 13:14:57.824799 | orchestrator | Monday 02 June 2025 13:14:44 +0000 (0:00:01.280) 0:00:51.889 ***********
2025-06-02 13:14:57.824809 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:57.824818 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.824827 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:57.824837 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:57.824846 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:57.824855 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:57.824865 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:57.824874 | orchestrator |
2025-06-02 13:14:57.824884 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-02 13:14:57.824902 | orchestrator | Monday 02 June 2025 13:14:46 +0000 (0:00:01.637) 0:00:53.526 ***********
2025-06-02 13:14:57.824912 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:14:57.824921 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:14:57.824930 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:14:57.824940 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.824949 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:14:57.824959 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:14:57.824968 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:14:57.824978 | orchestrator |
2025-06-02 13:14:57.824987 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-02 13:14:57.824997 | orchestrator | Monday 02 June 2025 13:14:48 +0000 (0:00:01.555) 0:00:55.081 ***********
2025-06-02 13:14:57.825006 | orchestrator | ok: [testbed-manager]
2025-06-02 13:14:57.825016 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:14:57.825025 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:14:57.825035 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:14:57.825044 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:14:57.825053 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:14:57.825062 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:14:57.825079 | orchestrator |
2025-06-02 13:14:57.825089 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-02 13:14:57.825099 | orchestrator | Monday 02 June 2025 13:14:49 +0000 (0:00:01.714) 0:00:56.795 ***********
2025-06-02 13:14:57.825108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-02 13:14:57.825120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:14:57.825130 | orchestrator |
2025-06-02 13:14:57.825139 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-02 13:14:57.825149 | orchestrator | Monday 02 June 2025 13:14:51 +0000 (0:00:01.392) 0:00:58.188 ***********
2025-06-02 13:14:57.825158 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.825187 | orchestrator |
2025-06-02 13:14:57.825197 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-02 13:14:57.825206 | orchestrator | Monday 02 June 2025 13:14:53 +0000 (0:00:01.872) 0:01:00.060 ***********
2025-06-02 13:14:57.825216 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:14:57.825225 | orchestrator | changed: [testbed-manager]
2025-06-02 13:14:57.825234 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:14:57.825244 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:14:57.825253 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:14:57.825263 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:14:57.825272 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:14:57.825282 | orchestrator |
2025-06-02 13:14:57.825291 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:14:57.825301 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825311 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825321 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825330 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825340 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825350 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825359 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:14:57.825369 | orchestrator |
2025-06-02 13:14:57.825378 | orchestrator |
2025-06-02 13:14:57.825387 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:14:57.825397 | orchestrator | Monday 02 June 2025 13:14:56 +0000 (0:00:03.266) 0:01:03.327 ***********
2025-06-02 13:14:57.825407 | orchestrator | ===============================================================================
2025-06-02 13:14:57.825416 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.26s
2025-06-02 13:14:57.825426 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.22s
2025-06-02 13:14:57.825439 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.41s
2025-06-02 13:14:57.825449 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.78s
2025-06-02 13:14:57.825459 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.29s
2025-06-02 13:14:57.825468 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.27s
2025-06-02 13:14:57.825483 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.47s
2025-06-02 13:14:57.825492 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.36s
2025-06-02 13:14:57.825502 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.22s
2025-06-02 13:14:57.825511 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.11s
2025-06-02 13:14:57.825521 | orchestrator |
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.87s
2025-06-02 13:14:57.825535 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.71s
2025-06-02 13:14:57.825545 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.64s
2025-06-02 13:14:57.825554 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.56s
2025-06-02 13:14:57.825564 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.39s
2025-06-02 13:14:57.825573 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s
2025-06-02 13:14:57.825610 | orchestrator | 2025-06-02 13:14:57 | INFO  | Task 52dd86dc-7390-4786-a3d9-33a86269857f is in state SUCCESS
2025-06-02 13:14:57.825722 | orchestrator | 2025-06-02 13:14:57 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:14:57.825736 | orchestrator | 2025-06-02 13:14:57 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:14:57.825746 | orchestrator | 2025-06-02 13:14:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:15:00.869903 | orchestrator | 2025-06-02 13:15:00 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:15:00.871579 | orchestrator | 2025-06-02 13:15:00 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:15:00.872201 | orchestrator | 2025-06-02 13:15:00 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:15:00.875229 | orchestrator | 2025-06-02 13:15:00 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:15:00.875258 | orchestrator | 2025-06-02 13:15:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:15:03.916362 | orchestrator | 2025-06-02 13:15:03 | INFO  | Task
a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:15:03.916510 | orchestrator | 2025-06-02 13:15:03 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:15:03.922766 | orchestrator | 2025-06-02 13:15:03 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:15:03.924342 | orchestrator | 2025-06-02 13:15:03 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state STARTED
2025-06-02 13:15:03.925004 | orchestrator | 2025-06-02 13:15:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:15:28.357753 | orchestrator | 2025-06-02 13:15:28 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:15:28.357845 | orchestrator | 2025-06-02 13:15:28 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:15:28.358863 | orchestrator | 2025-06-02 13:15:28 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:15:28.359679 | orchestrator | 2025-06-02 13:15:28 | INFO  | Task 20119c7d-a973-4bad-9563-54266673d4f6 is in state SUCCESS
2025-06-02 13:15:28.359699 | orchestrator | 2025-06-02 13:15:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:15:31.411795 | orchestrator | 2025-06-02 13:15:31 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:15:31.412494 | orchestrator | 2025-06-02 13:15:31 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state STARTED
2025-06-02 13:15:31.415060 | orchestrator | 2025-06-02 13:15:31 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:15:31.415122 | orchestrator | 2025-06-02 13:15:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:04.957240 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:04.957404 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:04.961324 | orchestrator | 2025-06-02
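The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above are a simple poll-until-done loop over background task IDs. A minimal sketch of that pattern follows; `get_state` and the task IDs are hypothetical stand-ins, since the log does not show the underlying implementation.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll task IDs until none is in state STARTED; return final states.

    `get_state` is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED" or "SUCCESS").
    """
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep polling only the tasks that have not finished yet.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return states
```

In the log, four task IDs shrink to three once `20119c7d-…` reaches SUCCESS, matching the behavior of dropping finished tasks from the pending set.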
13:16:04.961356 | orchestrator |
2025-06-02 13:16:04.961370 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-02 13:16:04.961382 | orchestrator |
2025-06-02 13:16:04.961393 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-02 13:16:04.961405 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:00.184) 0:00:00.184 ***********
2025-06-02 13:16:04.961418 | orchestrator | ok: [testbed-manager]
2025-06-02 13:16:04.961431 | orchestrator |
2025-06-02 13:16:04.961442 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-02 13:16:04.961453 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:00.697) 0:00:00.881 ***********
2025-06-02 13:16:04.961465 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-02 13:16:04.961502 | orchestrator |
2025-06-02 13:16:04.961514 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-02 13:16:04.961525 | orchestrator | Monday 02 June 2025 13:14:15 +0000 (0:00:00.786) 0:00:01.667 ***********
2025-06-02 13:16:04.961578 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.961590 | orchestrator |
2025-06-02 13:16:04.961601 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-02 13:16:04.961612 | orchestrator | Monday 02 June 2025 13:14:16 +0000 (0:00:01.181) 0:00:02.849 ***********
2025-06-02 13:16:04.961623 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-02 13:16:04.961634 | orchestrator | ok: [testbed-manager]
2025-06-02 13:16:04.961645 | orchestrator |
2025-06-02 13:16:04.961719 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-02 13:16:04.961733 | orchestrator | Monday 02 June 2025 13:15:24 +0000 (0:01:07.319) 0:01:10.168 ***********
2025-06-02 13:16:04.961744 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.961755 | orchestrator |
2025-06-02 13:16:04.961766 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:16:04.961777 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:16:04.961789 | orchestrator |
2025-06-02 13:16:04.961838 | orchestrator |
2025-06-02 13:16:04.961849 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:16:04.961860 | orchestrator | Monday 02 June 2025 13:15:27 +0000 (0:00:03.572) 0:01:13.741 ***********
2025-06-02 13:16:04.961870 | orchestrator | ===============================================================================
2025-06-02 13:16:04.961881 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 67.32s
2025-06-02 13:16:04.961891 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.57s
2025-06-02 13:16:04.961902 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.18s
2025-06-02 13:16:04.961913 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.79s
2025-06-02 13:16:04.961924 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.70s
2025-06-02 13:16:04.961935 | orchestrator |
2025-06-02 13:16:04.961975 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task 9e8dcd56-dff0-40cf-a6d2-74da1f1d8747 is in state SUCCESS
2025-06-02
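The "FAILED - RETRYING: … (10 retries left)." line above is Ansible's standard output for a task using `until`/`retries`/`delay`; here one retry sufficed before the task reported `ok`. A generic retry helper with the same countdown semantics can be sketched as follows (an illustration only; the actual task definition in the osism.services.phpmyadmin role is not shown in this log):

```python
import time

def retry_until(check, retries=10, delay=5, label="task"):
    """Re-run `check` until it returns True, mimicking Ansible's
    until/retries/delay loop: one initial attempt plus `retries`
    retries, printing the remaining-retries countdown on failure."""
    for remaining in range(retries, -1, -1):
        if check():
            return True
        if remaining == 0:
            break  # attempts exhausted
        print(f"FAILED - RETRYING: {label} ({remaining} retries left).")
        time.sleep(delay)
    return False
```

With `retries=10`, the first failed attempt prints "(10 retries left)", exactly as in the log, and a success on the next attempt ends the loop.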
13:16:04.963741 | orchestrator |
2025-06-02 13:16:04.963784 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-02 13:16:04.963796 | orchestrator |
2025-06-02 13:16:04.963808 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 13:16:04.963819 | orchestrator | Monday 02 June 2025 13:13:47 +0000 (0:00:00.203) 0:00:00.203 ***********
2025-06-02 13:16:04.963831 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:16:04.963844 | orchestrator |
2025-06-02 13:16:04.963855 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-02 13:16:04.963866 | orchestrator | Monday 02 June 2025 13:13:48 +0000 (0:00:01.069) 0:00:01.272 ***********
2025-06-02 13:16:04.963877 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.963888 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.963899 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.963910 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.963921 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.963932 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.963943 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.963986 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.963999 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.964010 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.964021 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.964031 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964048 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.964059 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964070 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 13:16:04.964081 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.964092 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964103 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964114 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 13:16:04.964125 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964135 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 13:16:04.964146 | orchestrator |
2025-06-02 13:16:04.964157 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 13:16:04.964168 | orchestrator | Monday 02 June 2025 13:13:52 +0000 (0:00:04.344) 0:00:05.617 ***********
2025-06-02 13:16:04.964179 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:16:04.964191 | orchestrator |
2025-06-02 13:16:04.964202 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-02 13:16:04.964213 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:01.171) 0:00:06.788 ***********
2025-06-02 13:16:04.964228 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.964245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.964272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.964329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.964344 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.964370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.964391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.964465 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.964629 | orchestrator | 2025-06-02 13:16:04.964642 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-02 13:16:04.964655 | orchestrator | Monday 02 June 2025 13:13:58 +0000 (0:00:04.852) 0:00:11.640 *********** 2025-06-02 13:16:04.964669 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.964681 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964693 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964704 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:16:04.964722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.964740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.964780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:16:04.964813 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:16:04.964825 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.964836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964881 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:16:04.964893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.964978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.964996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965008 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:16:04.965019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965075 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:16:04.965103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965138 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:16:04.965149 | orchestrator | 2025-06-02 13:16:04.965159 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-02 13:16:04.965170 | orchestrator | Monday 02 June 2025 13:13:59 +0000 (0:00:01.266) 0:00:12.907 *********** 2025-06-02 13:16:04.965187 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965199 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965210 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965229 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:16:04.965240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965280 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:16:04.965350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965392 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:16:04.965403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-02 13:16:04.965432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965443 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:16:04.965463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965497 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:16:04.965509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965548 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:16:04.965560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 13:16:04.965582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.965605 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:16:04.965616 | orchestrator | 2025-06-02 
13:16:04.965627 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-02 13:16:04.965638 | orchestrator | Monday 02 June 2025 13:14:02 +0000 (0:00:02.200) 0:00:15.108 *********** 2025-06-02 13:16:04.965648 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:16:04.965659 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:16:04.965670 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:16:04.965681 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:16:04.965691 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:16:04.965702 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:16:04.965713 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:16:04.965723 | orchestrator | 2025-06-02 13:16:04.965734 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-02 13:16:04.965744 | orchestrator | Monday 02 June 2025 13:14:03 +0000 (0:00:00.978) 0:00:16.086 *********** 2025-06-02 13:16:04.965753 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:16:04.965763 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:16:04.965772 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:16:04.965782 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:16:04.965791 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:16:04.965801 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:16:04.965810 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:16:04.965820 | orchestrator | 2025-06-02 13:16:04.965829 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-02 13:16:04.965839 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:01.029) 0:00:17.115 *********** 2025-06-02 13:16:04.965861 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965892 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.965912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 13:16:04.965964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.965975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.965986 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.965996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.966011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966139 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966193 | orchestrator | 2025-06-02 13:16:04.966202 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 13:16:04.966212 | orchestrator | 
Monday 02 June 2025 13:14:09 +0000 (0:00:05.463) 0:00:22.579 *********** 2025-06-02 13:16:04.966222 | orchestrator | [WARNING]: Skipped 2025-06-02 13:16:04.966232 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-02 13:16:04.966248 | orchestrator | to this access issue: 2025-06-02 13:16:04.966258 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-02 13:16:04.966267 | orchestrator | directory 2025-06-02 13:16:04.966277 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:16:04.966303 | orchestrator | 2025-06-02 13:16:04.966313 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-02 13:16:04.966323 | orchestrator | Monday 02 June 2025 13:14:10 +0000 (0:00:01.384) 0:00:23.964 *********** 2025-06-02 13:16:04.966332 | orchestrator | [WARNING]: Skipped 2025-06-02 13:16:04.966342 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-02 13:16:04.966351 | orchestrator | to this access issue: 2025-06-02 13:16:04.966361 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-02 13:16:04.966370 | orchestrator | directory 2025-06-02 13:16:04.966380 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:16:04.966389 | orchestrator | 2025-06-02 13:16:04.966398 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-02 13:16:04.966408 | orchestrator | Monday 02 June 2025 13:14:12 +0000 (0:00:01.327) 0:00:25.291 *********** 2025-06-02 13:16:04.966421 | orchestrator | [WARNING]: Skipped 2025-06-02 13:16:04.966431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-02 13:16:04.966441 | orchestrator | to this access issue: 2025-06-02 13:16:04.966450 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-02 13:16:04.966460 | orchestrator | directory 2025-06-02 13:16:04.966469 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:16:04.966479 | orchestrator | 2025-06-02 13:16:04.966488 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-02 13:16:04.966498 | orchestrator | Monday 02 June 2025 13:14:13 +0000 (0:00:01.084) 0:00:26.375 *********** 2025-06-02 13:16:04.966507 | orchestrator | [WARNING]: Skipped 2025-06-02 13:16:04.966517 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-02 13:16:04.966526 | orchestrator | to this access issue: 2025-06-02 13:16:04.966536 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-02 13:16:04.966545 | orchestrator | directory 2025-06-02 13:16:04.966555 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:16:04.966565 | orchestrator | 2025-06-02 13:16:04.966574 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-02 13:16:04.966584 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:00.657) 0:00:27.032 *********** 2025-06-02 13:16:04.966593 | orchestrator | changed: [testbed-manager] 2025-06-02 13:16:04.966603 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:16:04.966612 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:16:04.966621 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:16:04.966631 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:16:04.966640 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:16:04.966649 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:16:04.966659 | orchestrator | 2025-06-02 13:16:04.966668 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 13:16:04.966678 | orchestrator | 
Monday 02 June 2025 13:14:17 +0000 (0:00:03.276) 0:00:30.309 *********** 2025-06-02 13:16:04.966688 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966697 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966707 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966726 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966741 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966751 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 13:16:04.966760 | orchestrator | 2025-06-02 13:16:04.966770 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-02 13:16:04.966779 | orchestrator | Monday 02 June 2025 13:14:20 +0000 (0:00:02.829) 0:00:33.139 *********** 2025-06-02 13:16:04.966789 | orchestrator | changed: [testbed-manager] 2025-06-02 13:16:04.966798 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:16:04.966808 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:16:04.966818 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:16:04.966832 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:16:04.966842 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:16:04.966851 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:16:04.966861 | orchestrator | 2025-06-02 13:16:04.966870 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2025-06-02 13:16:04.966880 | orchestrator | Monday 02 June 2025 13:14:22 +0000 (0:00:02.558) 0:00:35.697 *********** 2025-06-02 13:16:04.966890 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.966900 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.966915 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:16:04.966937 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.966948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:16:04.966992 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 13:16:04.967020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967031 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967042 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967052 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967077 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967119 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967204 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967224 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967241 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967251 | orchestrator |
2025-06-02 13:16:04.967261 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-02 13:16:04.967271 | orchestrator | Monday 02 June 2025 13:14:25 +0000 (0:00:02.664) 0:00:38.361 ***********
2025-06-02 13:16:04.967281 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967337 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967368 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967378 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967387 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-02 13:16:04.967397 | orchestrator |
2025-06-02 13:16:04.967414 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-02 13:16:04.967424 | orchestrator | Monday 02 June 2025 13:14:27 +0000 (0:00:02.086) 0:00:40.448 ***********
2025-06-02 13:16:04.967434 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967462 | orchestrator | changed: [testbed-node-1] =>
(item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967472 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967481 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967491 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-02 13:16:04.967500 | orchestrator |
2025-06-02 13:16:04.967509 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-02 13:16:04.967519 | orchestrator | Monday 02 June 2025 13:14:29 +0000 (0:00:02.014) 0:00:42.462 ***********
2025-06-02 13:16:04.967529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967539 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967610 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967661 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 13:16:04.967707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE':
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:16:04.967784 | orchestrator |
2025-06-02 13:16:04.967792 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-06-02 13:16:04.967800 | orchestrator | Monday 02 June 2025 13:14:32 +0000 (0:00:03.488) 0:00:45.951 ***********
2025-06-02 13:16:04.967812 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.967821 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:04.967829 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:04.967837 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:04.967844 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:16:04.967852 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:16:04.967860 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:16:04.967868 | orchestrator |
2025-06-02 13:16:04.967876 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-06-02 13:16:04.967884 | orchestrator | Monday 02 June 2025 13:14:34 +0000 (0:00:01.630) 0:00:47.582 ***********
2025-06-02 13:16:04.967891 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.967899 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:04.967907 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:04.967914 | orchestrator | changed:
[testbed-node-2]
2025-06-02 13:16:04.967922 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:16:04.967930 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:16:04.967942 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:16:04.967950 | orchestrator |
2025-06-02 13:16:04.967958 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.967966 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:00.171) 0:00:49.103 ***********
2025-06-02 13:16:04.967974 | orchestrator |
2025-06-02 13:16:04.967981 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.967989 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:00.116) 0:00:49.275 ***********
2025-06-02 13:16:04.967997 | orchestrator |
2025-06-02 13:16:04.968005 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.968012 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:00.105) 0:00:49.392 ***********
2025-06-02 13:16:04.968020 | orchestrator |
2025-06-02 13:16:04.968028 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.968035 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:00.089) 0:00:49.497 ***********
2025-06-02 13:16:04.968043 | orchestrator |
2025-06-02 13:16:04.968051 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.968059 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:00.089) 0:00:49.587 ***********
2025-06-02 13:16:04.968066 | orchestrator |
2025-06-02 13:16:04.968074 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.968085 | orchestrator | Monday 02 June 2025 13:14:37 +0000 (0:00:00.516) 0:00:50.104 ***********
2025-06-02 13:16:04.968093 | orchestrator |
2025-06-02 13:16:04.968101 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-02 13:16:04.968109 | orchestrator | Monday 02 June 2025 13:14:37 +0000 (0:00:00.074) 0:00:50.178 ***********
2025-06-02 13:16:04.968117 | orchestrator |
2025-06-02 13:16:04.968124 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-06-02 13:16:04.968132 | orchestrator | Monday 02 June 2025 13:14:37 +0000 (0:00:00.096) 0:00:50.274 ***********
2025-06-02 13:16:04.968140 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:04.968148 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.968155 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:16:04.968163 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:16:04.968171 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:04.968178 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:04.968186 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:16:04.968194 | orchestrator |
2025-06-02 13:16:04.968202 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-02 13:16:04.968209 | orchestrator | Monday 02 June 2025 13:15:18 +0000 (0:00:40.886) 0:01:31.161 ***********
2025-06-02 13:16:04.968217 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:04.968225 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:16:04.968232 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:04.968240 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:16:04.968248 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:04.968255 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.968263 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:16:04.968271 | orchestrator |
2025-06-02 13:16:04.968279 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-02 13:16:04.968300 | orchestrator |
Monday 02 June 2025 13:15:51 +0000 (0:00:33.138) 0:02:04.300 ***********
2025-06-02 13:16:04.968308 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:16:04.968316 | orchestrator | ok: [testbed-manager]
2025-06-02 13:16:04.968324 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:16:04.968331 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:16:04.968339 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:16:04.968347 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:16:04.968355 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:16:04.968362 | orchestrator |
2025-06-02 13:16:04.968370 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-02 13:16:04.968383 | orchestrator | Monday 02 June 2025 13:15:53 +0000 (0:00:02.127) 0:02:06.428 ***********
2025-06-02 13:16:04.968391 | orchestrator | changed: [testbed-manager]
2025-06-02 13:16:04.968398 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:04.968406 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:04.968414 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:16:04.968422 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:04.968429 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:16:04.968437 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:16:04.968445 | orchestrator |
2025-06-02 13:16:04.968452 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:16:04.968461 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968470 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968478 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968490 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968499 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968506 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968514 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-02 13:16:04.968522 | orchestrator |
2025-06-02 13:16:04.968530 | orchestrator |
2025-06-02 13:16:04.968538 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:16:04.968546 | orchestrator | Monday 02 June 2025 13:16:02 +0000 (0:00:09.436) 0:02:15.865 ***********
2025-06-02 13:16:04.968554 | orchestrator | ===============================================================================
2025-06-02 13:16:04.968562 | orchestrator | common : Restart fluentd container ------------------------------------- 40.89s
2025-06-02 13:16:04.968569 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.14s
2025-06-02 13:16:04.968577 | orchestrator | common : Restart cron container ----------------------------------------- 9.44s
2025-06-02 13:16:04.968585 | orchestrator | common : Copying over config.json files for services -------------------- 5.46s
2025-06-02 13:16:04.968593 | orchestrator | common : service-cert-copy : common | Copying over extra CA certificates --------- 4.85s
2025-06-02 13:16:04.968600 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.34s
2025-06-02 13:16:04.968608 | orchestrator | common : Check common containers ---------------------------------------- 3.49s
2025-06-02 13:16:04.968616 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.28s
2025-06-02 13:16:04.968624 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.83s
2025-06-02 13:16:04.968635 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.66s
2025-06-02 13:16:04.968643 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.56s
2025-06-02 13:16:04.968651 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.20s
2025-06-02 13:16:04.968658 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.13s
2025-06-02 13:16:04.968666 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.09s
2025-06-02 13:16:04.968674 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.01s
2025-06-02 13:16:04.968687 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s
2025-06-02 13:16:04.968694 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.52s
2025-06-02 13:16:04.968702 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.38s
2025-06-02 13:16:04.968710 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.33s
2025-06-02 13:16:04.968718 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.27s
2025-06-02 13:16:04.968726 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:04.968733 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:04.968741 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:04.968853 | orchestrator | 2025-06-02 13:16:04 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:04.968866 | orchestrator | 2025-06-02 13:16:04 | INFO  | Wait 1
second(s) until the next check
2025-06-02 13:16:08.023097 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:08.024973 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:08.029887 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:08.030521 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:08.031033 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:08.035757 | orchestrator | 2025-06-02 13:16:08 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:08.035787 | orchestrator | 2025-06-02 13:16:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:11.070564 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:11.070763 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:11.071230 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:11.072517 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:11.072542 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:11.073121 | orchestrator | 2025-06-02 13:16:11 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:11.073142 | orchestrator | 2025-06-02 13:16:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:14.100872 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:14.101039 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:14.101440 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:14.102192 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:14.103802 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:14.105065 | orchestrator | 2025-06-02 13:16:14 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:14.105117 | orchestrator | 2025-06-02 13:16:14 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:17.144704 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:17.144793 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:17.144824 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:17.145169 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:17.145881 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:17.146640 | orchestrator | 2025-06-02 13:16:17 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:17.146840 | orchestrator | 2025-06-02 13:16:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:20.181268 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:20.183763 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:20.184719 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:20.185901 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:20.187932 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:20.188815 | orchestrator | 2025-06-02 13:16:20 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:20.189134 | orchestrator | 2025-06-02 13:16:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:23.236073 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:23.236487 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:23.237398 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:23.237910 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state STARTED
2025-06-02 13:16:23.238744 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:23.241019 | orchestrator | 2025-06-02 13:16:23 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:23.241843 | orchestrator | 2025-06-02 13:16:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:26.282896 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:26.283098 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:26.283834 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:26.284326 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task 314a5a63-2639-4ed5-ac43-572aac35cc37 is in state SUCCESS
2025-06-02 13:16:26.286149 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:26.286776 | orchestrator | 2025-06-02 13:16:26 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:26.288937 | orchestrator | 2025-06-02 13:16:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:29.343627 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:29.343765 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:29.343783 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED
2025-06-02 13:16:29.343794 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:29.343806 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:29.343894 | orchestrator | 2025-06-02 13:16:29 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:29.343911 | orchestrator | 2025-06-02 13:16:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:32.370307 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state STARTED
2025-06-02 13:16:32.370580 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:32.371068 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED
2025-06-02 13:16:32.374687 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task
3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:16:32.374723 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:16:32.374958 | orchestrator | 2025-06-02 13:16:32 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED 2025-06-02 13:16:32.375036 | orchestrator | 2025-06-02 13:16:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:16:35.411194 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task f977f9b5-067a-4a70-8fe4-e32c63e068b8 is in state SUCCESS 2025-06-02 13:16:35.411290 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:16:35.411306 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:16:35.411972 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:16:35.411998 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:16:35.415698 | orchestrator | 2025-06-02 13:16:35.415741 | orchestrator | 2025-06-02 13:16:35.415753 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:16:35.415765 | orchestrator | 2025-06-02 13:16:35.415776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:16:35.415787 | orchestrator | Monday 02 June 2025 13:16:09 +0000 (0:00:00.304) 0:00:00.304 *********** 2025-06-02 13:16:35.415799 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:16:35.415810 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:16:35.415821 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:16:35.415832 | orchestrator | 2025-06-02 13:16:35.415843 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 
13:16:35.415854 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.622) 0:00:00.927 *********** 2025-06-02 13:16:35.415866 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-02 13:16:35.415877 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-02 13:16:35.415888 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-02 13:16:35.415952 | orchestrator | 2025-06-02 13:16:35.415965 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-02 13:16:35.415977 | orchestrator | 2025-06-02 13:16:35.415988 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-02 13:16:35.415999 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.758) 0:00:01.685 *********** 2025-06-02 13:16:35.416010 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:16:35.416021 | orchestrator | 2025-06-02 13:16:35.416032 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-02 13:16:35.416043 | orchestrator | Monday 02 June 2025 13:16:12 +0000 (0:00:00.764) 0:00:02.450 *********** 2025-06-02 13:16:35.416054 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-02 13:16:35.416065 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-02 13:16:35.416075 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-02 13:16:35.416086 | orchestrator | 2025-06-02 13:16:35.416097 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-02 13:16:35.416108 | orchestrator | Monday 02 June 2025 13:16:13 +0000 (0:00:01.043) 0:00:03.494 *********** 2025-06-02 13:16:35.416118 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-02 13:16:35.416129 | orchestrator | changed: 
[testbed-node-1] => (item=memcached) 2025-06-02 13:16:35.416140 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-02 13:16:35.416151 | orchestrator | 2025-06-02 13:16:35.416161 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-02 13:16:35.416172 | orchestrator | Monday 02 June 2025 13:16:15 +0000 (0:00:02.605) 0:00:06.100 *********** 2025-06-02 13:16:35.416183 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:16:35.416194 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:16:35.416204 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:16:35.416215 | orchestrator | 2025-06-02 13:16:35.416226 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-02 13:16:35.416237 | orchestrator | Monday 02 June 2025 13:16:17 +0000 (0:00:02.090) 0:00:08.190 *********** 2025-06-02 13:16:35.416247 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:16:35.416258 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:16:35.416269 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:16:35.416279 | orchestrator | 2025-06-02 13:16:35.416290 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:16:35.416303 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:16:35.416317 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:16:35.416329 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:16:35.416341 | orchestrator | 2025-06-02 13:16:35.416353 | orchestrator | 2025-06-02 13:16:35.416433 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:16:35.416460 | orchestrator | Monday 02 June 2025 13:16:25 +0000 (0:00:07.243) 0:00:15.434 
*********** 2025-06-02 13:16:35.416473 | orchestrator | =============================================================================== 2025-06-02 13:16:35.416486 | orchestrator | memcached : Restart memcached container --------------------------------- 7.24s 2025-06-02 13:16:35.416498 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.61s 2025-06-02 13:16:35.416511 | orchestrator | memcached : Check memcached container ----------------------------------- 2.09s 2025-06-02 13:16:35.416523 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.04s 2025-06-02 13:16:35.416535 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.76s 2025-06-02 13:16:35.416559 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2025-06-02 13:16:35.416571 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2025-06-02 13:16:35.416584 | orchestrator | 2025-06-02 13:16:35.416596 | orchestrator | 2025-06-02 13:16:35.416608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:16:35.416620 | orchestrator | 2025-06-02 13:16:35.416632 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:16:35.416645 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.352) 0:00:00.352 *********** 2025-06-02 13:16:35.416657 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:16:35.416669 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:16:35.416679 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:16:35.416690 | orchestrator | 2025-06-02 13:16:35.416701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:16:35.416725 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.245) 0:00:00.598 *********** 2025-06-02 
13:16:35.416737 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-02 13:16:35.416748 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-02 13:16:35.416759 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-02 13:16:35.416769 | orchestrator | 2025-06-02 13:16:35.416780 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-02 13:16:35.416791 | orchestrator | 2025-06-02 13:16:35.416802 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-02 13:16:35.416813 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.758) 0:00:01.356 *********** 2025-06-02 13:16:35.416824 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:16:35.416835 | orchestrator | 2025-06-02 13:16:35.416845 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-02 13:16:35.416856 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.697) 0:00:02.054 *********** 2025-06-02 13:16:35.416870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.416991 | orchestrator | 2025-06-02 13:16:35.417011 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-02 13:16:35.417023 | orchestrator | Monday 02 June 2025 13:16:13 +0000 (0:00:01.617) 0:00:03.672 *********** 2025-06-02 13:16:35.417034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2025-06-02 13:16:35.417102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417149 | orchestrator | 2025-06-02 13:16:35.417160 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-02 13:16:35.417171 | orchestrator | Monday 02 June 2025 13:16:16 +0000 (0:00:03.297) 0:00:06.969 *********** 2025-06-02 13:16:35.417183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417277 | orchestrator | 2025-06-02 13:16:35.417288 | orchestrator | TASK 
[redis : Check redis containers] ****************************************** 2025-06-02 13:16:35.417302 | orchestrator | Monday 02 June 2025 13:16:19 +0000 (0:00:02.797) 0:00:09.767 *********** 2025-06-02 13:16:35.417320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2025-06-02 13:16:35.417382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-02 13:16:35.417420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 13:16:35.417431 | orchestrator |
2025-06-02 13:16:35.417442 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 13:16:35.417453 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:01.693) 0:00:11.460 ***********
2025-06-02 13:16:35.417464 | orchestrator |
2025-06-02 13:16:35.417475 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 13:16:35.417485 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:00.156) 0:00:11.617 ***********
2025-06-02 13:16:35.417496 | orchestrator |
2025-06-02 13:16:35.417506 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 13:16:35.417517 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:00.082) 0:00:11.700 ***********
2025-06-02 13:16:35.417527 | orchestrator |
2025-06-02 13:16:35.417538 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-02 13:16:35.417549 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:00.092) 0:00:11.793 ***********
2025-06-02 13:16:35.417559 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:35.417570 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:35.417581 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:35.417591 | orchestrator |
2025-06-02 13:16:35.417602 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-02 13:16:35.417612 | orchestrator | Monday 02 June 2025 13:16:25 +0000 (0:00:04.140) 0:00:15.933 ***********
2025-06-02 13:16:35.417623 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:16:35.417634 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:16:35.417644 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:16:35.417661 | orchestrator |
2025-06-02 13:16:35.417672 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:16:35.417683 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:16:35.417694 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:16:35.417704 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:16:35.417715 | orchestrator |
2025-06-02 13:16:35.417726 | orchestrator |
2025-06-02 13:16:35.417736 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:16:35.417747 | orchestrator | Monday 02 June 2025 13:16:34 +0000 (0:00:08.883) 0:00:24.816 ***********
2025-06-02 13:16:35.417758 | orchestrator | ===============================================================================
2025-06-02 13:16:35.417768 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.88s
2025-06-02 13:16:35.417779 | orchestrator | redis : Restart redis container ----------------------------------------- 4.14s
2025-06-02 13:16:35.417789 | orchestrator | redis : Copying over default config.json files -------------------------- 3.30s
2025-06-02 13:16:35.417800 | orchestrator | redis : Copying over redis config files --------------------------------- 2.80s
2025-06-02 13:16:35.417810 | orchestrator | redis : Check redis containers ------------------------------------------ 1.69s
2025-06-02 13:16:35.417821 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.62s
2025-06-02 13:16:35.417832 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s
2025-06-02 13:16:35.417842 | orchestrator | redis : include_tasks --------------------------------------------------- 0.70s
2025-06-02 13:16:35.417853 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.33s
2025-06-02 13:16:35.417864 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2025-06-02 13:16:35.417965 | orchestrator | 2025-06-02 13:16:35 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:35.417980 | orchestrator | 2025-06-02 13:16:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:38.450351 | orchestrator | 2025-06-02 13:16:38 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:38.452184 | orchestrator | 2025-06-02 13:16:38 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED
2025-06-02 13:16:38.455082 | orchestrator | 2025-06-02 13:16:38 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:38.457212 | orchestrator | 2025-06-02 13:16:38 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:38.459307 | orchestrator | 2025-06-02 13:16:38 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:38.459566 | orchestrator | 2025-06-02 13:16:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:16:41.493104 | orchestrator | 2025-06-02 13:16:41 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:16:41.494433 | orchestrator | 2025-06-02 13:16:41 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED
2025-06-02 13:16:41.495871 | orchestrator | 2025-06-02 13:16:41 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:16:41.497058 | orchestrator | 2025-06-02 13:16:41 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:16:41.498435 | orchestrator | 2025-06-02
13:16:41 | INFO  | Task 1e09617a-aeba-4550-ba00-98fe0317b83a is in state STARTED
2025-06-02 13:16:41.498753 | orchestrator | 2025-06-02 13:16:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:17:11.907028 | orchestrator | 2025-06-02 13:17:11 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED
2025-06-02 13:17:11.908395 | orchestrator | 2025-06-02 13:17:11 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED
2025-06-02 13:17:11.910276 | orchestrator | 2025-06-02 13:17:11 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:17:11.914901 | orchestrator | 2025-06-02 13:17:11 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:17:11.916510 | orchestrator | 2025-06-02 13:17:11 | INFO  | Task
1e09617a-aeba-4550-ba00-98fe0317b83a is in state SUCCESS
2025-06-02 13:17:11.918908 | orchestrator |
2025-06-02 13:17:11.918948 | orchestrator |
2025-06-02 13:17:11.918961 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:17:11.918977 | orchestrator |
2025-06-02 13:17:11.918988 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:17:11.919000 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.470) 0:00:00.470 ***********
2025-06-02 13:17:11.919011 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:17:11.919022 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:17:11.919033 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:17:11.919043 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:17:11.919054 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:17:11.919064 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:17:11.919075 | orchestrator |
2025-06-02 13:17:11.919086 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:17:11.919096 | orchestrator | Monday 02 June 2025 13:16:12 +0000 (0:00:01.455) 0:00:01.925 ***********
2025-06-02 13:17:11.919107 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919118 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919129 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919139 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919150 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919161 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-02 13:17:11.919171 | orchestrator |
2025-06-02 13:17:11.919182 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-02 13:17:11.919192 | orchestrator |
2025-06-02 13:17:11.919203 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-02 13:17:11.919214 | orchestrator | Monday 02 June 2025 13:16:13 +0000 (0:00:01.137) 0:00:03.063 ***********
2025-06-02 13:17:11.919225 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:17:11.919237 | orchestrator |
2025-06-02 13:17:11.919248 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 13:17:11.919258 | orchestrator | Monday 02 June 2025 13:16:15 +0000 (0:00:02.118) 0:00:05.181 ***********
2025-06-02 13:17:11.919269 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 13:17:11.919280 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 13:17:11.919291 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 13:17:11.919301 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 13:17:11.919312 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 13:17:11.919323 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 13:17:11.919334 | orchestrator |
2025-06-02 13:17:11.919345 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 13:17:11.919355 | orchestrator | Monday 02 June 2025 13:16:17 +0000 (0:00:02.101) 0:00:06.884 ***********
2025-06-02 13:17:11.919366 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-02 13:17:11.919377 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-02 13:17:11.919387 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-02 13:17:11.919411 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-02 13:17:11.919422 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-02 13:17:11.919433 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-02 13:17:11.919471 | orchestrator |
2025-06-02 13:17:11.919484 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 13:17:11.919495 | orchestrator | Monday 02 June 2025 13:16:19 +0000 (0:00:02.101) 0:00:08.986 ***********
2025-06-02 13:17:11.919509 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-02 13:17:11.919522 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:17:11.919534 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-02 13:17:11.919547 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:17:11.919559 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-02 13:17:11.919571 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:17:11.919584 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-02 13:17:11.919596 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:17:11.919608 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-02 13:17:11.919620 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:17:11.919632 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-02 13:17:11.919644 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:17:11.919657 | orchestrator |
2025-06-02 13:17:11.919669 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-02 13:17:11.919681 | orchestrator | Monday 02 June 2025 13:16:20 +0000 (0:00:01.080) 0:00:10.067 ***********
2025-06-02 13:17:11.919693 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:17:11.919705 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:17:11.919718 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:17:11.919731 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:17:11.919743 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:17:11.919755 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:17:11.919768 | orchestrator |
2025-06-02 13:17:11.919781 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-02 13:17:11.919793 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:00.667) 0:00:10.735 ***********
2025-06-02 13:17:11.919829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.919913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.919925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.919937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.919965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.919976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.919997 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920009 | orchestrator |
2025-06-02 13:17:11.920020 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-02 13:17:11.920031 | orchestrator | Monday 02 June 2025 13:16:23 +0000 (0:00:02.320) 0:00:13.055 ***********
2025-06-02 13:17:11.920043 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-02 13:17:11.920128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-02 13:17:11.920246 | orchestrator |
2025-06-02 13:17:11.920258 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-02 13:17:11.920269 | orchestrator | Monday 02 June 2025 13:16:27 +0000 (0:00:01.652) 0:00:16.306 ***********
2025-06-02 13:17:11.920280 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:17:11.920291 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:17:11.920301 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:17:11.920312 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:17:11.920322 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:17:11.920333 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:17:11.920344 | orchestrator |
2025-06-02 13:17:11.920354 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-02 13:17:11.920365 | orchestrator | Monday 02 June 2025 13:16:28 +0000 (0:00:01.652) 0:00:17.958 ***********
2025-06-02 13:17:11.920376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 13:17:11.920586 | orchestrator | 2025-06-02 13:17:11.920597 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920608 | orchestrator | Monday 02 June 2025 13:16:31 +0000 (0:00:02.773) 0:00:20.731 *********** 2025-06-02 13:17:11.920619 | orchestrator | 2025-06-02 13:17:11.920630 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920641 | orchestrator | Monday 02 June 2025 13:16:31 +0000 (0:00:00.244) 0:00:20.976 *********** 2025-06-02 13:17:11.920651 | orchestrator | 2025-06-02 13:17:11.920662 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920672 | orchestrator | Monday 02 June 2025 13:16:31 +0000 (0:00:00.154) 0:00:21.131 *********** 2025-06-02 13:17:11.920683 | orchestrator | 2025-06-02 
13:17:11.920693 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920704 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.135) 0:00:21.266 *********** 2025-06-02 13:17:11.920714 | orchestrator | 2025-06-02 13:17:11.920725 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920736 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.124) 0:00:21.390 *********** 2025-06-02 13:17:11.920746 | orchestrator | 2025-06-02 13:17:11.920757 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 13:17:11.920767 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.123) 0:00:21.514 *********** 2025-06-02 13:17:11.920778 | orchestrator | 2025-06-02 13:17:11.920788 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 13:17:11.920799 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.404) 0:00:21.918 *********** 2025-06-02 13:17:11.920809 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:17:11.920820 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:17:11.920830 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:17:11.920841 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:17:11.920851 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:17:11.920862 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:17:11.920873 | orchestrator | 2025-06-02 13:17:11.920883 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-02 13:17:11.920894 | orchestrator | Monday 02 June 2025 13:16:43 +0000 (0:00:10.989) 0:00:32.907 *********** 2025-06-02 13:17:11.920910 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:17:11.920921 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:17:11.920932 | orchestrator | ok: [testbed-node-1] 
2025-06-02 13:17:11.920942 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:17:11.920953 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:17:11.920963 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:17:11.920973 | orchestrator | 2025-06-02 13:17:11.920984 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 13:17:11.920995 | orchestrator | Monday 02 June 2025 13:16:45 +0000 (0:00:01.719) 0:00:34.627 *********** 2025-06-02 13:17:11.921005 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:17:11.921016 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:17:11.921026 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:17:11.921037 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:17:11.921047 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:17:11.921058 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:17:11.921069 | orchestrator | 2025-06-02 13:17:11.921079 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 13:17:11.921094 | orchestrator | Monday 02 June 2025 13:16:50 +0000 (0:00:04.904) 0:00:39.532 *********** 2025-06-02 13:17:11.921110 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 13:17:11.921122 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 13:17:11.921133 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 13:17:11.921144 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 13:17:11.921154 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 13:17:11.921165 | orchestrator | changed: 
[testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 13:17:11.921176 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 13:17:11.921186 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 13:17:11.921197 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 13:17:11.921207 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 13:17:11.921218 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 13:17:11.921228 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 13:17:11.921239 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921249 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921260 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921270 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921281 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921292 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 13:17:11.921302 | orchestrator | 2025-06-02 
13:17:11.921313 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 13:17:11.921329 | orchestrator | Monday 02 June 2025 13:16:57 +0000 (0:00:07.045) 0:00:46.578 *********** 2025-06-02 13:17:11.921340 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 13:17:11.921351 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:17:11.921362 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 13:17:11.921372 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:17:11.921383 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 13:17:11.921394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:17:11.921404 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 13:17:11.921415 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 13:17:11.921425 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 13:17:11.921436 | orchestrator | 2025-06-02 13:17:11.921499 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 13:17:11.921512 | orchestrator | Monday 02 June 2025 13:16:59 +0000 (0:00:02.145) 0:00:48.724 *********** 2025-06-02 13:17:11.921523 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 13:17:11.921533 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:17:11.921543 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 13:17:11.921552 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:17:11.921562 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-02 13:17:11.921571 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:17:11.921580 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 13:17:11.921590 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 13:17:11.921599 | 
orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 13:17:11.921608 | orchestrator | 2025-06-02 13:17:11.921618 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 13:17:11.921627 | orchestrator | Monday 02 June 2025 13:17:03 +0000 (0:00:03.498) 0:00:52.223 *********** 2025-06-02 13:17:11.921637 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:17:11.921646 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:17:11.921655 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:17:11.921665 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:17:11.921674 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:17:11.921683 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:17:11.921693 | orchestrator | 2025-06-02 13:17:11.921702 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:17:11.921712 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 13:17:11.921732 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 13:17:11.921743 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 13:17:11.921752 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:17:11.921762 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:17:11.921772 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:17:11.921781 | orchestrator | 2025-06-02 13:17:11.921791 | orchestrator | 2025-06-02 13:17:11.921800 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:17:11.921810 | 
orchestrator | Monday 02 June 2025 13:17:11 +0000 (0:00:08.113) 0:01:00.337 *********** 2025-06-02 13:17:11.921819 | orchestrator | =============================================================================== 2025-06-02 13:17:11.921834 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.02s 2025-06-02 13:17:11.921844 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.99s 2025-06-02 13:17:11.921853 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.05s 2025-06-02 13:17:11.921863 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.50s 2025-06-02 13:17:11.921872 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.25s 2025-06-02 13:17:11.921882 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.77s 2025-06-02 13:17:11.921891 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.32s 2025-06-02 13:17:11.921901 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.15s 2025-06-02 13:17:11.921910 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.12s 2025-06-02 13:17:11.921920 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.10s 2025-06-02 13:17:11.921929 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.72s 2025-06-02 13:17:11.921938 | orchestrator | module-load : Load modules ---------------------------------------------- 1.70s 2025-06-02 13:17:11.921948 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.65s 2025-06-02 13:17:11.921957 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.46s 2025-06-02 13:17:11.921967 | orchestrator | 
openvswitch : Flush Handlers -------------------------------------------- 1.19s 2025-06-02 13:17:11.921976 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2025-06-02 13:17:11.921985 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.08s 2025-06-02 13:17:11.921995 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.67s 2025-06-02 13:17:11.922093 | orchestrator | 2025-06-02 13:17:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:17:14.948313 | orchestrator | 2025-06-02 13:17:14 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:17:14.949089 | orchestrator | 2025-06-02 13:17:14 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:17:14.952406 | orchestrator | 2025-06-02 13:17:14 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:17:14.953210 | orchestrator | 2025-06-02 13:17:14 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:17:14.954207 | orchestrator | 2025-06-02 13:17:14 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:17:14.954234 | orchestrator | 2025-06-02 13:17:14 | INFO  | Wait 1 second(s) until the next check
2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:03.692866 | orchestrator | 2025-06-02 13:18:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:06.733036 | orchestrator | 2025-06-02 13:18:06 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:06.735198 | orchestrator | 2025-06-02 13:18:06 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:06.736792 | orchestrator | 2025-06-02 13:18:06 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:06.738658 | orchestrator | 2025-06-02 13:18:06 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:06.742819 | orchestrator | 2025-06-02 13:18:06 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:06.742879 | orchestrator | 2025-06-02 13:18:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:09.793542 | orchestrator | 2025-06-02 13:18:09 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:09.796777 | orchestrator | 2025-06-02 13:18:09 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:09.798418 | orchestrator | 2025-06-02 13:18:09 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:09.800353 | orchestrator | 2025-06-02 13:18:09 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:09.802886 | orchestrator | 2025-06-02 13:18:09 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:09.802913 | orchestrator | 2025-06-02 13:18:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:12.854656 | orchestrator | 2025-06-02 13:18:12 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:12.859524 | orchestrator | 2025-06-02 13:18:12 | INFO  | Task 
a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:12.859560 | orchestrator | 2025-06-02 13:18:12 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:12.862434 | orchestrator | 2025-06-02 13:18:12 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:12.862948 | orchestrator | 2025-06-02 13:18:12 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:12.862977 | orchestrator | 2025-06-02 13:18:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:15.983153 | orchestrator | 2025-06-02 13:18:15 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:15.992248 | orchestrator | 2025-06-02 13:18:15 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:15.992730 | orchestrator | 2025-06-02 13:18:15 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:15.993556 | orchestrator | 2025-06-02 13:18:15 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:15.998324 | orchestrator | 2025-06-02 13:18:15 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:15.998355 | orchestrator | 2025-06-02 13:18:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:19.042101 | orchestrator | 2025-06-02 13:18:19 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:19.042302 | orchestrator | 2025-06-02 13:18:19 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:19.042736 | orchestrator | 2025-06-02 13:18:19 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:19.043581 | orchestrator | 2025-06-02 13:18:19 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:19.044380 | orchestrator | 2025-06-02 13:18:19 | INFO  | Task 
2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:19.044799 | orchestrator | 2025-06-02 13:18:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:22.084127 | orchestrator | 2025-06-02 13:18:22 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:22.084277 | orchestrator | 2025-06-02 13:18:22 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state STARTED 2025-06-02 13:18:22.084739 | orchestrator | 2025-06-02 13:18:22 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:22.085265 | orchestrator | 2025-06-02 13:18:22 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:22.086808 | orchestrator | 2025-06-02 13:18:22 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:22.086836 | orchestrator | 2025-06-02 13:18:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:25.118364 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:25.119334 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task a31af1af-8385-4029-b2e2-5599ca1b1273 is in state SUCCESS 2025-06-02 13:18:25.120720 | orchestrator | 2025-06-02 13:18:25.120735 | orchestrator | 2025-06-02 13:18:25.120740 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-02 13:18:25.120745 | orchestrator | 2025-06-02 13:18:25.120749 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-02 13:18:25.120753 | orchestrator | Monday 02 June 2025 13:13:47 +0000 (0:00:00.206) 0:00:00.206 *********** 2025-06-02 13:18:25.120757 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:18:25.120762 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:18:25.120765 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:18:25.120769 | orchestrator | ok: [testbed-node-0] 
2025-06-02 13:18:25.120773 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.120777 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.120780 | orchestrator | 2025-06-02 13:18:25.120784 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-02 13:18:25.120788 | orchestrator | Monday 02 June 2025 13:13:48 +0000 (0:00:00.770) 0:00:00.976 *********** 2025-06-02 13:18:25.120792 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.120819 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.120827 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.120833 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.120839 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.120846 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.120850 | orchestrator | 2025-06-02 13:18:25.120854 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-02 13:18:25.120858 | orchestrator | Monday 02 June 2025 13:13:49 +0000 (0:00:00.685) 0:00:01.662 *********** 2025-06-02 13:18:25.120862 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.120865 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.120869 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.120873 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.120877 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.120881 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.120895 | orchestrator | 2025-06-02 13:18:25.120899 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-02 13:18:25.120903 | orchestrator | Monday 02 June 2025 13:13:50 +0000 (0:00:00.844) 0:00:02.506 *********** 2025-06-02 13:18:25.120906 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:18:25.120910 | orchestrator | changed: [testbed-node-3] 2025-06-02 
13:18:25.120914 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:18:25.120917 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.120921 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:25.120925 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:25.120929 | orchestrator | 2025-06-02 13:18:25.120935 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-02 13:18:25.120939 | orchestrator | Monday 02 June 2025 13:13:51 +0000 (0:00:01.819) 0:00:04.326 *********** 2025-06-02 13:18:25.120943 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:18:25.120946 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:18:25.120950 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:18:25.120954 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.120957 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:25.120961 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:25.120964 | orchestrator | 2025-06-02 13:18:25.120968 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-02 13:18:25.120972 | orchestrator | Monday 02 June 2025 13:13:52 +0000 (0:00:01.057) 0:00:05.383 *********** 2025-06-02 13:18:25.120976 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:18:25.120979 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:18:25.120983 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:18:25.120987 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.120990 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:25.120994 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:25.120998 | orchestrator | 2025-06-02 13:18:25.121001 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-02 13:18:25.121005 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:00.974) 0:00:06.358 *********** 2025-06-02 13:18:25.121009 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121012 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121016 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121020 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121024 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121027 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121031 | orchestrator | 2025-06-02 13:18:25.121035 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-02 13:18:25.121038 | orchestrator | Monday 02 June 2025 13:13:54 +0000 (0:00:00.921) 0:00:07.279 *********** 2025-06-02 13:18:25.121042 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121046 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121049 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121053 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121057 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121060 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121064 | orchestrator | 2025-06-02 13:18:25.121068 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-02 13:18:25.121071 | orchestrator | Monday 02 June 2025 13:13:55 +0000 (0:00:00.601) 0:00:07.881 *********** 2025-06-02 13:18:25.121075 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121079 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121082 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121086 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121090 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121093 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 13:18:25.121097 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121104 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121107 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121111 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121120 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121124 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121127 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121131 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121135 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121138 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:18:25.121142 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:18:25.121146 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121149 | orchestrator | 2025-06-02 13:18:25.121153 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-02 13:18:25.121157 | orchestrator | Monday 02 June 2025 13:13:56 +0000 (0:00:01.146) 0:00:09.028 *********** 2025-06-02 13:18:25.121160 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121164 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121168 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121171 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121175 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121179 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121182 | orchestrator | 
2025-06-02 13:18:25.121186 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-02 13:18:25.121190 | orchestrator | Monday 02 June 2025 13:13:57 +0000 (0:00:01.377) 0:00:10.405 *********** 2025-06-02 13:18:25.121194 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:18:25.121197 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:18:25.121201 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:18:25.121205 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121208 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121212 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121215 | orchestrator | 2025-06-02 13:18:25.121219 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-02 13:18:25.121223 | orchestrator | Monday 02 June 2025 13:13:58 +0000 (0:00:00.586) 0:00:10.992 *********** 2025-06-02 13:18:25.121227 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:18:25.121230 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:18:25.121234 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.121238 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:25.121241 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:18:25.121245 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:25.121249 | orchestrator | 2025-06-02 13:18:25.121254 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-02 13:18:25.121258 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:05.980) 0:00:16.973 *********** 2025-06-02 13:18:25.121261 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121265 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121268 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121272 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121276 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 13:18:25.121279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121283 | orchestrator | 2025-06-02 13:18:25.121287 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-02 13:18:25.121290 | orchestrator | Monday 02 June 2025 13:14:05 +0000 (0:00:01.191) 0:00:18.164 *********** 2025-06-02 13:18:25.121294 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121298 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121304 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121308 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121311 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121315 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121319 | orchestrator | 2025-06-02 13:18:25.121322 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-02 13:18:25.121327 | orchestrator | Monday 02 June 2025 13:14:07 +0000 (0:00:01.693) 0:00:19.858 *********** 2025-06-02 13:18:25.121330 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121334 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121338 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121341 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121345 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121349 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121352 | orchestrator | 2025-06-02 13:18:25.121356 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-02 13:18:25.121360 | orchestrator | Monday 02 June 2025 13:14:08 +0000 (0:00:00.678) 0:00:20.537 *********** 2025-06-02 13:18:25.121363 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-02 13:18:25.121368 | orchestrator | skipping: 
[testbed-node-3] => (item=rancher/k3s)  2025-06-02 13:18:25.121371 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121375 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-02 13:18:25.121379 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-02 13:18:25.121382 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121386 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-02 13:18:25.121389 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-02 13:18:25.121393 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121397 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-02 13:18:25.121400 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-02 13:18:25.121404 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121408 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-02 13:18:25.121411 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-02 13:18:25.121415 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121418 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-02 13:18:25.121422 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-02 13:18:25.121426 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121429 | orchestrator | 2025-06-02 13:18:25.121433 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-02 13:18:25.121439 | orchestrator | Monday 02 June 2025 13:14:09 +0000 (0:00:01.038) 0:00:21.575 *********** 2025-06-02 13:18:25.121443 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.121446 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.121450 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.121453 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121457 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121461 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121464 | orchestrator | 2025-06-02 13:18:25.121468 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-02 13:18:25.121472 | orchestrator | 2025-06-02 13:18:25.121475 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-02 13:18:25.121479 | orchestrator | Monday 02 June 2025 13:14:10 +0000 (0:00:01.281) 0:00:22.857 *********** 2025-06-02 13:18:25.121483 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121486 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121490 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121494 | orchestrator | 2025-06-02 13:18:25.121497 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-02 13:18:25.121503 | orchestrator | Monday 02 June 2025 13:14:11 +0000 (0:00:01.478) 0:00:24.335 *********** 2025-06-02 13:18:25.121507 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121510 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121514 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121518 | orchestrator | 2025-06-02 13:18:25.121521 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-02 13:18:25.121525 | orchestrator | Monday 02 June 2025 13:14:13 +0000 (0:00:01.356) 0:00:25.692 *********** 2025-06-02 13:18:25.121529 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121533 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121536 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121540 | orchestrator | 2025-06-02 13:18:25.121544 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-02 13:18:25.121547 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:01.012) 
0:00:26.704 *********** 2025-06-02 13:18:25.121551 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121554 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121558 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121562 | orchestrator | 2025-06-02 13:18:25.121565 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-02 13:18:25.121569 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:00.708) 0:00:27.412 *********** 2025-06-02 13:18:25.121573 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121576 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121582 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121585 | orchestrator | 2025-06-02 13:18:25.121589 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-02 13:18:25.121593 | orchestrator | Monday 02 June 2025 13:14:15 +0000 (0:00:00.465) 0:00:27.877 *********** 2025-06-02 13:18:25.121596 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:18:25.121611 | orchestrator | 2025-06-02 13:18:25.121615 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-02 13:18:25.121619 | orchestrator | Monday 02 June 2025 13:14:16 +0000 (0:00:00.653) 0:00:28.531 *********** 2025-06-02 13:18:25.121623 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:25.121626 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:25.121630 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:25.121634 | orchestrator | 2025-06-02 13:18:25.121638 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-02 13:18:25.121641 | orchestrator | Monday 02 June 2025 13:14:17 +0000 (0:00:01.862) 0:00:30.393 *********** 2025-06-02 13:18:25.121645 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
13:18:25.121649 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121653 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.121656 | orchestrator | 2025-06-02 13:18:25.121660 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-02 13:18:25.121664 | orchestrator | Monday 02 June 2025 13:14:18 +0000 (0:00:00.763) 0:00:31.157 *********** 2025-06-02 13:18:25.121668 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121671 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121675 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.121679 | orchestrator | 2025-06-02 13:18:25.121683 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-02 13:18:25.121686 | orchestrator | Monday 02 June 2025 13:14:20 +0000 (0:00:01.439) 0:00:32.597 *********** 2025-06-02 13:18:25.121690 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121694 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121698 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.121701 | orchestrator | 2025-06-02 13:18:25.121705 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-02 13:18:25.121709 | orchestrator | Monday 02 June 2025 13:14:22 +0000 (0:00:02.016) 0:00:34.613 *********** 2025-06-02 13:18:25.121712 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.121719 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121723 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121727 | orchestrator | 2025-06-02 13:18:25.121730 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-02 13:18:25.121734 | orchestrator | Monday 02 June 2025 13:14:22 +0000 (0:00:00.305) 0:00:34.919 *********** 2025-06-02 13:18:25.121738 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
13:18:25.121741 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.121745 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.121749 | orchestrator | 2025-06-02 13:18:25.121753 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-02 13:18:25.121756 | orchestrator | Monday 02 June 2025 13:14:22 +0000 (0:00:00.354) 0:00:35.274 *********** 2025-06-02 13:18:25.121762 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:25.121768 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:25.121774 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:25.121779 | orchestrator | 2025-06-02 13:18:25.121785 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-02 13:18:25.121791 | orchestrator | Monday 02 June 2025 13:14:24 +0000 (0:00:02.148) 0:00:37.422 *********** 2025-06-02 13:18:25.121801 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 13:18:25.121809 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 13:18:25.121815 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-02 13:18:25.121820 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 13:18:25.121826 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-02 13:18:25.121832 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-06-02 13:18:25.121838 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 13:18:25.121845 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 13:18:25.121852 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-02 13:18:25.121858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 13:18:25.121864 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 13:18:25.121870 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-02 13:18:25.121877 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-02 13:18:25.121890 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-02 13:18:25.121897 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Monday 02 June 2025 13:15:20 +0000 (0:00:55.833) 0:01:33.256 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Monday 02 June 2025 13:15:21 +0000 (0:00:00.426) 0:01:33.682 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Monday 02 June 2025 13:15:22 +0000 (0:00:01.020) 0:01:34.703 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Monday 02 June 2025 13:15:23 +0000 (0:00:01.178) 0:01:35.881 ***********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Monday 02 June 2025 13:15:39 +0000 (0:00:16.089) 0:01:51.971 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Register node-token file access mode] ***********************
Monday 02 June 2025 13:15:40 +0000 (0:00:00.598) 0:01:52.569 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Monday 02 June 2025 13:15:40 +0000 (0:00:00.556) 0:01:53.125 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Monday 02 June 2025 13:15:41 +0000 (0:00:00.619) 0:01:53.745 ***********
ok: [testbed-node-1]
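The node-token tasks follow a register/relax/read/restore sequence: record the file's access mode, loosen it so the token can be read, then put the original mode back. A minimal Python sketch of that sequence, assuming a generic file path rather than k3s's actual `/var/lib/rancher/k3s/server/node-token`:

```python
import os
import stat

def read_with_temporary_mode(path, temp_mode=0o644):
    """Record a file's permission bits, relax them, read the contents,
    then restore the original bits -- the register/change/restore
    pattern shown in the tasks above (illustrative only)."""
    original_mode = stat.S_IMODE(os.stat(path).st_mode)
    os.chmod(path, temp_mode)
    try:
        with open(path) as handle:
            token = handle.read().strip()
    finally:
        # Restore the recorded mode even if the read fails.
        os.chmod(path, original_mode)
    return token
```

Restoring inside `finally` matters: if the read raises, the token file is not left world-readable.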
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Store Master node-token] ************************************
Monday 02 June 2025 13:15:42 +0000 (0:00:00.828) 0:01:54.573 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Monday 02 June 2025 13:15:42 +0000 (0:00:00.282) 0:01:54.855 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Monday 02 June 2025 13:15:43 +0000 (0:00:00.619) 0:01:55.475 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Monday 02 June 2025 13:15:43 +0000 (0:00:00.583) 0:01:56.059 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Monday 02 June 2025 13:15:44 +0000 (0:00:01.020) 0:01:57.080 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Monday 02 June 2025 13:15:45 +0000 (0:00:00.811) 0:01:57.891 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Monday 02 June 2025 13:15:45 +0000 (0:00:00.256) 0:01:58.148 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Monday 02 June 2025 13:15:46 +0000 (0:00:00.277) 0:01:58.425 ***********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Monday 02 June 2025 13:15:46 +0000 (0:00:00.929) 0:01:59.355 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Monday 02 June 2025 13:15:47 +0000 (0:00:00.581) 0:01:59.937 ***********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 02 June 2025 13:15:50 +0000 (0:00:02.913) 0:02:02.851 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 02 June 2025 13:15:50 +0000 (0:00:00.490) 0:02:03.341 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 02 June 2025 13:15:51 +0000 (0:00:00.380) 0:02:03.948 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 02 June 2025 13:15:51 +0000 (0:00:00.380) 0:02:04.329 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 02 June 2025 13:15:52 +0000 (0:00:00.649) 0:02:04.978 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 02 June 2025 13:15:52 +0000 (0:00:00.314) 0:02:05.292 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 02 June 2025 13:15:53 +0000 (0:00:00.308) 0:02:05.601 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 02 June 2025 13:15:53 +0000 (0:00:00.290) 0:02:05.892 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 02 June 2025 13:15:54 +0000 (0:00:01.502) 0:02:07.395 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 02 June 2025 13:16:03 +0000 (0:00:08.690) 0:02:16.086 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 02 June 2025 13:16:04 +0000 (0:00:00.842) 0:02:16.929 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 02 June 2025 13:16:04 +0000 (0:00:00.444) 0:02:17.373 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 02 June 2025 13:16:06 +0000 (0:00:01.074) 0:02:18.448 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 02 June 2025 13:16:06 +0000 (0:00:00.865) 0:02:19.313 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 02 June 2025 13:16:07 +0000 (0:00:00.640) 0:02:19.953 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 02 June 2025 13:16:08 +0000 (0:00:01.243) 0:02:21.196 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 02 June 2025 13:16:09 +0000 (0:00:00.794) 0:02:21.991 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 02 June 2025 13:16:09 +0000 (0:00:00.303) 0:02:22.294 ***********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 02 June 2025 13:16:10 +0000 (0:00:00.376) 0:02:22.671 ***********
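The kubeconfig fetched from the first control node points at that node's local API endpoint, so the "Change server address" tasks rewrite its `server:` line to the cluster VIP before the manager uses it. A minimal sketch of such a rewrite, using a plain regex over the kubeconfig text rather than the tasks' actual mechanism (which this log does not show):

```python
import re

def set_server_address(kubeconfig_text, new_server):
    """Replace the value of every `server:` line in a kubeconfig
    document with `new_server`, preserving indentation.
    Illustrative only; a YAML-aware edit would be more robust."""
    return re.sub(
        r"(?m)^(\s*server:\s).*$",
        lambda match: match.group(1) + new_server,
        kubeconfig_text,
    )
```

Using a replacement callable instead of a replacement string avoids `re.sub` misreading characters in the URL as group references.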
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 02 June 2025 13:16:10 +0000 (0:00:00.155) 0:02:22.826 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 02 June 2025 13:16:10 +0000 (0:00:00.202) 0:02:23.029 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 02 June 2025 13:16:11 +0000 (0:00:00.957) 0:02:23.987 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 02 June 2025 13:16:12 +0000 (0:00:01.101) 0:02:25.089 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 02 June 2025 13:16:13 +0000 (0:00:00.629) 0:02:25.718 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 02 June 2025 13:16:13 +0000 (0:00:00.340) 0:02:26.059 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 02 June 2025 13:16:18 +0000 (0:00:05.026) 0:02:31.085 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 02 June 2025 13:16:29 +0000 (0:00:10.803) 0:02:41.889 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 02 June 2025 13:16:29 +0000 (0:00:00.417) 0:02:42.306 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 02 June 2025 13:16:30 +0000 (0:00:00.416) 0:02:42.722 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 02 June 2025 13:16:30 +0000 (0:00:00.250) 0:02:42.972 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 02 June 2025 13:16:30 +0000 (0:00:00.394) 0:02:43.367 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 02 June 2025 13:16:31 +0000 (0:00:00.767) 0:02:44.134 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 02 June 2025 13:16:32 +0000 (0:00:00.704) 0:02:44.839 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 02 June 2025 13:16:32 +0000 (0:00:00.451) 0:02:45.291 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 02 June 2025 13:16:33 +0000 (0:00:00.894) 0:02:46.185 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 02 June 2025 13:16:33 +0000 (0:00:00.165) 0:02:46.351 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 02 June 2025 13:16:34 +0000 (0:00:00.166) 0:02:46.518 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 02 June 2025 13:16:34 +0000 (0:00:00.241) 0:02:46.759 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 02 June 2025 13:16:34 +0000 (0:00:00.196) 0:02:46.956 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 02 June 2025 13:16:38 +0000 (0:00:03.882) 0:02:50.839 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (29 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 02 June 2025 13:17:58 +0000 (0:01:19.805) 0:04:10.644 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 02 June 2025 13:17:59 +0000 (0:00:01.265) 0:04:11.910 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 02 June 2025 13:18:01 +0000 (0:00:01.533) 0:04:13.443 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 02 June 2025 13:18:02 +0000 (0:00:01.074) 0:04:14.517 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 02 June 2025 13:18:02 +0000 (0:00:00.197) 0:04:14.715 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 02 June 2025 13:18:04 +0000 (0:00:02.486) 0:04:17.202 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 02 June 2025 13:18:05 +0000 (0:00:00.699) 0:04:17.901 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 02 June 2025 13:18:06 +0000 (0:00:00.955) 0:04:18.856 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 02 June 2025 13:18:06 +0000 (0:00:00.127) 0:04:18.984 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 02 June 2025 13:18:06 +0000 (0:00:00.400) 0:04:19.384 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 02 June 2025 13:18:12 +0000 (0:00:05.994) 0:04:25.379 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 02 June 2025 13:18:13 +0000 (0:00:00.454) 0:04:25.834 ***********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok:
[testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 13:18:25.123860 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-02 13:18:25.123863 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 13:18:25.123867 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 13:18:25.123871 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-02 13:18:25.123877 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 13:18:25.123880 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 13:18:25.123884 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-02 13:18:25.123888 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 13:18:25.123891 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 13:18:25.123895 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-02 13:18:25.123899 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 13:18:25.123902 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 13:18:25.123906 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-02 13:18:25.123910 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 13:18:25.123913 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 13:18:25.123920 | orchestrator | ok: [testbed-node-2 
-> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-02 13:18:25.123924 | orchestrator | 2025-06-02 13:18:25.123928 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-02 13:18:25.123931 | orchestrator | Monday 02 June 2025 13:18:22 +0000 (0:00:08.909) 0:04:34.743 *********** 2025-06-02 13:18:25.123935 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.123939 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.123942 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.123946 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.123950 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.123953 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.123957 | orchestrator | 2025-06-02 13:18:25.123961 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-02 13:18:25.123964 | orchestrator | Monday 02 June 2025 13:18:22 +0000 (0:00:00.492) 0:04:35.235 *********** 2025-06-02 13:18:25.123968 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:18:25.123972 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:18:25.123976 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:18:25.123979 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:25.123983 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:25.123987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:25.123990 | orchestrator | 2025-06-02 13:18:25.123994 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:18:25.124000 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:18:25.124004 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-02 13:18:25.124008 | orchestrator | testbed-node-1 : ok=34  changed=14 
 unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 13:18:25.124012 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 13:18:25.124016 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 13:18:25.124020 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 13:18:25.124026 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 13:18:25.124030 | orchestrator | 2025-06-02 13:18:25.124033 | orchestrator | 2025-06-02 13:18:25.124037 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:18:25.124041 | orchestrator | Monday 02 June 2025 13:18:23 +0000 (0:00:00.497) 0:04:35.733 *********** 2025-06-02 13:18:25.124044 | orchestrator | =============================================================================== 2025-06-02 13:18:25.124048 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 79.81s 2025-06-02 13:18:25.124052 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.83s 2025-06-02 13:18:25.124056 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 16.09s 2025-06-02 13:18:25.124059 | orchestrator | kubectl : Install required packages ------------------------------------ 10.80s 2025-06-02 13:18:25.124063 | orchestrator | Manage labels ----------------------------------------------------------- 8.91s 2025-06-02 13:18:25.124067 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.69s 2025-06-02 13:18:25.124070 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.00s 2025-06-02 13:18:25.124074 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 5.98s 2025-06-02 13:18:25.124078 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.03s 2025-06-02 13:18:25.124081 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 3.88s 2025-06-02 13:18:25.124085 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.91s 2025-06-02 13:18:25.124089 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.49s 2025-06-02 13:18:25.124092 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.15s 2025-06-02 13:18:25.124096 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.02s 2025-06-02 13:18:25.124100 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.86s 2025-06-02 13:18:25.124103 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.82s 2025-06-02 13:18:25.124107 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.69s 2025-06-02 13:18:25.124111 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.53s 2025-06-02 13:18:25.124115 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.50s 2025-06-02 13:18:25.124118 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.48s 2025-06-02 13:18:25.124122 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:25.124128 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task 6cb39f2b-10d1-454b-92e6-8d7ad5f755cb is in state STARTED 2025-06-02 13:18:25.124211 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task 
4a53dae8-5615-49d1-932b-26a18c89fc68 is in state STARTED 2025-06-02 13:18:25.124222 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:25.124229 | orchestrator | 2025-06-02 13:18:25 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:25.124235 | orchestrator | 2025-06-02 13:18:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:28.163105 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:28.163851 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:28.166161 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task 6cb39f2b-10d1-454b-92e6-8d7ad5f755cb is in state STARTED 2025-06-02 13:18:28.168787 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task 4a53dae8-5615-49d1-932b-26a18c89fc68 is in state STARTED 2025-06-02 13:18:28.170513 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:28.172910 | orchestrator | 2025-06-02 13:18:28 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:28.172986 | orchestrator | 2025-06-02 13:18:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:31.205850 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:31.208440 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:31.212039 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task 6cb39f2b-10d1-454b-92e6-8d7ad5f755cb is in state STARTED 2025-06-02 13:18:31.213147 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task 4a53dae8-5615-49d1-932b-26a18c89fc68 is in state SUCCESS 2025-06-02 13:18:31.213718 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task 
3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:31.217197 | orchestrator | 2025-06-02 13:18:31 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:31.217223 | orchestrator | 2025-06-02 13:18:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:34.269424 | orchestrator | 2025-06-02 13:18:34 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:34.269511 | orchestrator | 2025-06-02 13:18:34 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:34.269525 | orchestrator | 2025-06-02 13:18:34 | INFO  | Task 6cb39f2b-10d1-454b-92e6-8d7ad5f755cb is in state SUCCESS 2025-06-02 13:18:34.270309 | orchestrator | 2025-06-02 13:18:34 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:34.271537 | orchestrator | 2025-06-02 13:18:34 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:34.271602 | orchestrator | 2025-06-02 13:18:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:37.303259 | orchestrator | 2025-06-02 13:18:37 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:37.304807 | orchestrator | 2025-06-02 13:18:37 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:37.306139 | orchestrator | 2025-06-02 13:18:37 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:37.307434 | orchestrator | 2025-06-02 13:18:37 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:37.307684 | orchestrator | 2025-06-02 13:18:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:40.345969 | orchestrator | 2025-06-02 13:18:40 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:40.349881 | orchestrator | 2025-06-02 13:18:40 | INFO  | Task 
7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:40.349931 | orchestrator | 2025-06-02 13:18:40 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:40.349944 | orchestrator | 2025-06-02 13:18:40 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:40.349956 | orchestrator | 2025-06-02 13:18:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:43.371899 | orchestrator | 2025-06-02 13:18:43 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:43.374672 | orchestrator | 2025-06-02 13:18:43 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state STARTED 2025-06-02 13:18:43.375192 | orchestrator | 2025-06-02 13:18:43 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:43.375838 | orchestrator | 2025-06-02 13:18:43 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:43.375865 | orchestrator | 2025-06-02 13:18:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:46.414014 | orchestrator | 2025-06-02 13:18:46 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:46.416034 | orchestrator | 2025-06-02 13:18:46 | INFO  | Task 7c19b369-c7cb-4b0b-9782-d73bc990c4c5 is in state SUCCESS 2025-06-02 13:18:46.418158 | orchestrator | 2025-06-02 13:18:46.418213 | orchestrator | 2025-06-02 13:18:46.418235 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-02 13:18:46.418250 | orchestrator | 2025-06-02 13:18:46.418278 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 13:18:46.418290 | orchestrator | Monday 02 June 2025 13:18:27 +0000 (0:00:00.140) 0:00:00.140 *********** 2025-06-02 13:18:46.418301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 13:18:46.418312 | 
orchestrator | 2025-06-02 13:18:46.418323 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 13:18:46.418334 | orchestrator | Monday 02 June 2025 13:18:27 +0000 (0:00:00.686) 0:00:00.827 *********** 2025-06-02 13:18:46.418345 | orchestrator | changed: [testbed-manager] 2025-06-02 13:18:46.418356 | orchestrator | 2025-06-02 13:18:46.418416 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 13:18:46.418427 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:01.109) 0:00:01.936 *********** 2025-06-02 13:18:46.418438 | orchestrator | changed: [testbed-manager] 2025-06-02 13:18:46.418449 | orchestrator | 2025-06-02 13:18:46.418460 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:18:46.418471 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:18:46.418483 | orchestrator | 2025-06-02 13:18:46.418494 | orchestrator | 2025-06-02 13:18:46.418505 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:18:46.418515 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:00.390) 0:00:02.327 *********** 2025-06-02 13:18:46.418526 | orchestrator | =============================================================================== 2025-06-02 13:18:46.418536 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.11s 2025-06-02 13:18:46.418547 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-06-02 13:18:46.418558 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-06-02 13:18:46.418576 | orchestrator | 2025-06-02 13:18:46.418594 | orchestrator | 2025-06-02 13:18:46.418612 | orchestrator | PLAY [Prepare kubeconfig file] 
************************************************* 2025-06-02 13:18:46.418623 | orchestrator | 2025-06-02 13:18:46.418634 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 13:18:46.418670 | orchestrator | Monday 02 June 2025 13:18:27 +0000 (0:00:00.175) 0:00:00.175 *********** 2025-06-02 13:18:46.418683 | orchestrator | ok: [testbed-manager] 2025-06-02 13:18:46.418695 | orchestrator | 2025-06-02 13:18:46.418706 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 13:18:46.418725 | orchestrator | Monday 02 June 2025 13:18:27 +0000 (0:00:00.503) 0:00:00.678 *********** 2025-06-02 13:18:46.418744 | orchestrator | ok: [testbed-manager] 2025-06-02 13:18:46.418760 | orchestrator | 2025-06-02 13:18:46.418773 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 13:18:46.418785 | orchestrator | Monday 02 June 2025 13:18:28 +0000 (0:00:00.512) 0:00:01.191 *********** 2025-06-02 13:18:46.418820 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 13:18:46.418831 | orchestrator | 2025-06-02 13:18:46.418842 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 13:18:46.418853 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:00.762) 0:00:01.953 *********** 2025-06-02 13:18:46.418864 | orchestrator | changed: [testbed-manager] 2025-06-02 13:18:46.418874 | orchestrator | 2025-06-02 13:18:46.418885 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 13:18:46.418896 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:01.046) 0:00:03.000 *********** 2025-06-02 13:18:46.418906 | orchestrator | changed: [testbed-manager] 2025-06-02 13:18:46.418917 | orchestrator | 2025-06-02 13:18:46.418928 | orchestrator | TASK [Make kubeconfig available for use inside the manager 
service] ************ 2025-06-02 13:18:46.418938 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.472) 0:00:03.473 *********** 2025-06-02 13:18:46.418949 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 13:18:46.418960 | orchestrator | 2025-06-02 13:18:46.418971 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-02 13:18:46.418981 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:01.514) 0:00:04.987 *********** 2025-06-02 13:18:46.418992 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 13:18:46.419002 | orchestrator | 2025-06-02 13:18:46.419013 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 13:18:46.419023 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.724) 0:00:05.712 *********** 2025-06-02 13:18:46.419034 | orchestrator | ok: [testbed-manager] 2025-06-02 13:18:46.419044 | orchestrator | 2025-06-02 13:18:46.419055 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 13:18:46.419066 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.318) 0:00:06.030 *********** 2025-06-02 13:18:46.419076 | orchestrator | ok: [testbed-manager] 2025-06-02 13:18:46.419087 | orchestrator | 2025-06-02 13:18:46.419097 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:18:46.419108 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:18:46.419119 | orchestrator | 2025-06-02 13:18:46.419129 | orchestrator | 2025-06-02 13:18:46.419140 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:18:46.419150 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.252) 0:00:06.283 *********** 2025-06-02 13:18:46.419161 | orchestrator | 
=============================================================================== 2025-06-02 13:18:46.419172 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2025-06-02 13:18:46.419182 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s 2025-06-02 13:18:46.419193 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.76s 2025-06-02 13:18:46.419218 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2025-06-02 13:18:46.419229 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2025-06-02 13:18:46.419248 | orchestrator | Get home directory of operator user ------------------------------------- 0.50s 2025-06-02 13:18:46.419259 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s 2025-06-02 13:18:46.419269 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2025-06-02 13:18:46.419283 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.25s 2025-06-02 13:18:46.419301 | orchestrator | 2025-06-02 13:18:46.419312 | orchestrator | 2025-06-02 13:18:46.419329 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-02 13:18:46.419342 | orchestrator | 2025-06-02 13:18:46.419352 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 13:18:46.419363 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.081) 0:00:00.081 *********** 2025-06-02 13:18:46.419382 | orchestrator | ok: [localhost] => { 2025-06-02 13:18:46.419394 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-06-02 13:18:46.419405 | orchestrator | } 2025-06-02 13:18:46.419416 | orchestrator | 2025-06-02 13:18:46.419427 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-02 13:18:46.419437 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.042) 0:00:00.124 *********** 2025-06-02 13:18:46.419449 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-02 13:18:46.419461 | orchestrator | ...ignoring 2025-06-02 13:18:46.419472 | orchestrator | 2025-06-02 13:18:46.419482 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-02 13:18:46.419493 | orchestrator | Monday 02 June 2025 13:16:35 +0000 (0:00:03.606) 0:00:03.730 *********** 2025-06-02 13:18:46.419507 | orchestrator | skipping: [localhost] 2025-06-02 13:18:46.419523 | orchestrator | 2025-06-02 13:18:46.419534 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-02 13:18:46.419544 | orchestrator | Monday 02 June 2025 13:16:35 +0000 (0:00:00.091) 0:00:03.822 *********** 2025-06-02 13:18:46.419555 | orchestrator | ok: [localhost] 2025-06-02 13:18:46.419565 | orchestrator | 2025-06-02 13:18:46.419576 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:18:46.419586 | orchestrator | 2025-06-02 13:18:46.419597 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:18:46.419607 | orchestrator | Monday 02 June 2025 13:16:36 +0000 (0:00:00.218) 0:00:04.041 *********** 2025-06-02 13:18:46.419618 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:46.419629 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:46.419639 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:46.419701 | orchestrator | 2025-06-02 
13:18:46.419721 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:18:46.419739 | orchestrator | Monday 02 June 2025 13:16:36 +0000 (0:00:00.318) 0:00:04.360 *********** 2025-06-02 13:18:46.419750 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-02 13:18:46.419761 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-02 13:18:46.419772 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-02 13:18:46.419782 | orchestrator | 2025-06-02 13:18:46.419793 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-02 13:18:46.419804 | orchestrator | 2025-06-02 13:18:46.419814 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 13:18:46.419825 | orchestrator | Monday 02 June 2025 13:16:37 +0000 (0:00:00.580) 0:00:04.940 *********** 2025-06-02 13:18:46.419836 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:18:46.419846 | orchestrator | 2025-06-02 13:18:46.419857 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 13:18:46.419868 | orchestrator | Monday 02 June 2025 13:16:37 +0000 (0:00:00.880) 0:00:05.821 *********** 2025-06-02 13:18:46.419879 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:46.419889 | orchestrator | 2025-06-02 13:18:46.419900 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-02 13:18:46.419913 | orchestrator | Monday 02 June 2025 13:16:38 +0000 (0:00:00.950) 0:00:06.772 *********** 2025-06-02 13:18:46.419932 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.419951 | orchestrator | 2025-06-02 13:18:46.419970 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-06-02 13:18:46.419987 | orchestrator | Monday 02 June 2025 13:16:39 +0000 (0:00:00.302) 0:00:07.075 *********** 2025-06-02 13:18:46.419998 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.420009 | orchestrator | 2025-06-02 13:18:46.420019 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-02 13:18:46.420039 | orchestrator | Monday 02 June 2025 13:16:39 +0000 (0:00:00.286) 0:00:07.362 *********** 2025-06-02 13:18:46.420050 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.420060 | orchestrator | 2025-06-02 13:18:46.420071 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-02 13:18:46.420081 | orchestrator | Monday 02 June 2025 13:16:39 +0000 (0:00:00.308) 0:00:07.670 *********** 2025-06-02 13:18:46.420092 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.420103 | orchestrator | 2025-06-02 13:18:46.420113 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 13:18:46.420124 | orchestrator | Monday 02 June 2025 13:16:40 +0000 (0:00:00.431) 0:00:08.102 *********** 2025-06-02 13:18:46.420150 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:18:46.420171 | orchestrator | 2025-06-02 13:18:46.420183 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 13:18:46.420203 | orchestrator | Monday 02 June 2025 13:16:41 +0000 (0:00:00.934) 0:00:09.037 *********** 2025-06-02 13:18:46.420214 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:46.420225 | orchestrator | 2025-06-02 13:18:46.420235 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-02 13:18:46.420254 | orchestrator | Monday 02 June 2025 13:16:41 +0000 (0:00:00.724) 0:00:09.762 *********** 2025-06-02 
13:18:46.420270 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.420281 | orchestrator | 2025-06-02 13:18:46.420291 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-02 13:18:46.420302 | orchestrator | Monday 02 June 2025 13:16:42 +0000 (0:00:00.447) 0:00:10.209 *********** 2025-06-02 13:18:46.420313 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.420332 | orchestrator | 2025-06-02 13:18:46.420351 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-02 13:18:46.420370 | orchestrator | Monday 02 June 2025 13:16:42 +0000 (0:00:00.360) 0:00:10.570 *********** 2025-06-02 13:18:46.420395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420474 | orchestrator | 2025-06-02 13:18:46.420486 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-02 13:18:46.420497 | orchestrator | Monday 02 June 2025 13:16:44 +0000 (0:00:01.309) 0:00:11.879 *********** 2025-06-02 13:18:46.420530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.420610 | orchestrator | 2025-06-02 13:18:46.420629 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-02 13:18:46.420733 | orchestrator | Monday 02 June 2025 13:16:46 +0000 (0:00:02.381) 0:00:14.261 *********** 2025-06-02 13:18:46.420750 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 13:18:46.420761 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 13:18:46.420772 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 13:18:46.420783 | orchestrator | 2025-06-02 13:18:46.420794 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-02 13:18:46.420805 | orchestrator | Monday 02 June 2025 13:16:48 +0000 (0:00:02.144) 0:00:16.405 *********** 2025-06-02 13:18:46.420815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 13:18:46.420826 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 13:18:46.420836 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 13:18:46.420854 | orchestrator | 2025-06-02 13:18:46.420884 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-02 13:18:46.420897 | orchestrator | Monday 02 June 2025 13:16:50 +0000 (0:00:01.782) 0:00:18.188 *********** 2025-06-02 13:18:46.420914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 13:18:46.420925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 13:18:46.420939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 13:18:46.420994 | orchestrator | 2025-06-02 13:18:46.421008 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-02 13:18:46.421019 | orchestrator | Monday 02 June 2025 13:16:51 +0000 (0:00:01.471) 0:00:19.659 *********** 
2025-06-02 13:18:46.421030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 13:18:46.421041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 13:18:46.421052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 13:18:46.421063 | orchestrator | 2025-06-02 13:18:46.421073 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-02 13:18:46.421084 | orchestrator | Monday 02 June 2025 13:16:53 +0000 (0:00:01.519) 0:00:21.179 *********** 2025-06-02 13:18:46.421094 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 13:18:46.421105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 13:18:46.421116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 13:18:46.421126 | orchestrator | 2025-06-02 13:18:46.421138 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-02 13:18:46.421157 | orchestrator | Monday 02 June 2025 13:16:54 +0000 (0:00:01.258) 0:00:22.437 *********** 2025-06-02 13:18:46.421168 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 13:18:46.421179 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 13:18:46.421189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 13:18:46.421200 | orchestrator | 2025-06-02 13:18:46.421210 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 13:18:46.421221 | orchestrator | Monday 02 June 
2025 13:16:55 +0000 (0:00:01.402) 0:00:23.840 *********** 2025-06-02 13:18:46.421232 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.421243 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:46.421253 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:46.421264 | orchestrator | 2025-06-02 13:18:46.421275 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 13:18:46.421285 | orchestrator | Monday 02 June 2025 13:16:56 +0000 (0:00:00.434) 0:00:24.275 *********** 2025-06-02 13:18:46.421298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.421326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.421340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 13:18:46.421358 | orchestrator | 2025-06-02 13:18:46.421370 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] 
************************************* 2025-06-02 13:18:46.421380 | orchestrator | Monday 02 June 2025 13:16:57 +0000 (0:00:01.489) 0:00:25.764 *********** 2025-06-02 13:18:46.421391 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:46.421402 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:46.421412 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:46.421423 | orchestrator | 2025-06-02 13:18:46.421434 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-02 13:18:46.421444 | orchestrator | Monday 02 June 2025 13:16:58 +0000 (0:00:00.960) 0:00:26.725 *********** 2025-06-02 13:18:46.421455 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:46.421466 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:46.421476 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:46.421487 | orchestrator | 2025-06-02 13:18:46.421498 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-02 13:18:46.421508 | orchestrator | Monday 02 June 2025 13:17:06 +0000 (0:00:07.699) 0:00:34.424 *********** 2025-06-02 13:18:46.421519 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:46.421530 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:46.421540 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:46.421551 | orchestrator | 2025-06-02 13:18:46.421561 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 13:18:46.421572 | orchestrator | 2025-06-02 13:18:46.421583 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 13:18:46.421594 | orchestrator | Monday 02 June 2025 13:17:06 +0000 (0:00:00.388) 0:00:34.813 *********** 2025-06-02 13:18:46.421604 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:46.421615 | orchestrator | 2025-06-02 13:18:46.421626 | orchestrator | TASK [rabbitmq : Put RabbitMQ node 
into maintenance mode] ********************** 2025-06-02 13:18:46.421637 | orchestrator | Monday 02 June 2025 13:17:07 +0000 (0:00:00.585) 0:00:35.398 *********** 2025-06-02 13:18:46.421742 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:18:46.421757 | orchestrator | 2025-06-02 13:18:46.421768 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 13:18:46.421779 | orchestrator | Monday 02 June 2025 13:17:07 +0000 (0:00:00.218) 0:00:35.616 *********** 2025-06-02 13:18:46.421790 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:46.421800 | orchestrator | 2025-06-02 13:18:46.421811 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 13:18:46.421821 | orchestrator | Monday 02 June 2025 13:17:14 +0000 (0:00:06.544) 0:00:42.161 *********** 2025-06-02 13:18:46.421832 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:18:46.421843 | orchestrator | 2025-06-02 13:18:46.421853 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 13:18:46.421871 | orchestrator | 2025-06-02 13:18:46.421890 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 13:18:46.421908 | orchestrator | Monday 02 June 2025 13:18:03 +0000 (0:00:49.671) 0:01:31.832 *********** 2025-06-02 13:18:46.421920 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:46.421931 | orchestrator | 2025-06-02 13:18:46.421942 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 13:18:46.421952 | orchestrator | Monday 02 June 2025 13:18:04 +0000 (0:00:00.657) 0:01:32.490 *********** 2025-06-02 13:18:46.421963 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:18:46.421973 | orchestrator | 2025-06-02 13:18:46.421990 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 
2025-06-02 13:18:46.422009 | orchestrator | Monday 02 June 2025 13:18:05 +0000 (0:00:00.763) 0:01:33.254 *********** 2025-06-02 13:18:46.422096 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:46.422117 | orchestrator | 2025-06-02 13:18:46.422139 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 13:18:46.422160 | orchestrator | Monday 02 June 2025 13:18:07 +0000 (0:00:01.885) 0:01:35.139 *********** 2025-06-02 13:18:46.422179 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:18:46.422190 | orchestrator | 2025-06-02 13:18:46.422201 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 13:18:46.422211 | orchestrator | 2025-06-02 13:18:46.422222 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 13:18:46.422242 | orchestrator | Monday 02 June 2025 13:18:21 +0000 (0:00:14.200) 0:01:49.340 *********** 2025-06-02 13:18:46.422253 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:46.422264 | orchestrator | 2025-06-02 13:18:46.422275 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 13:18:46.422286 | orchestrator | Monday 02 June 2025 13:18:22 +0000 (0:00:00.638) 0:01:49.979 *********** 2025-06-02 13:18:46.422297 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:18:46.422307 | orchestrator | 2025-06-02 13:18:46.422318 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 13:18:46.422331 | orchestrator | Monday 02 June 2025 13:18:22 +0000 (0:00:00.209) 0:01:50.188 *********** 2025-06-02 13:18:46.422350 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:46.422369 | orchestrator | 2025-06-02 13:18:46.422390 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 13:18:46.422408 | orchestrator | Monday 02 
June 2025 13:18:28 +0000 (0:00:06.511) 0:01:56.700 *********** 2025-06-02 13:18:46.422428 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:18:46.422448 | orchestrator | 2025-06-02 13:18:46.422466 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-02 13:18:46.422478 | orchestrator | 2025-06-02 13:18:46.422488 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-02 13:18:46.422573 | orchestrator | Monday 02 June 2025 13:18:39 +0000 (0:00:10.817) 0:02:07.517 *********** 2025-06-02 13:18:46.422594 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:18:46.422605 | orchestrator | 2025-06-02 13:18:46.422617 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-02 13:18:46.422636 | orchestrator | Monday 02 June 2025 13:18:40 +0000 (0:00:01.179) 0:02:08.697 *********** 2025-06-02 13:18:46.422675 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 13:18:46.422687 | orchestrator | enable_outward_rabbitmq_True 2025-06-02 13:18:46.422697 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 13:18:46.422708 | orchestrator | outward_rabbitmq_restart 2025-06-02 13:18:46.422719 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:18:46.422729 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:18:46.422740 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:18:46.422751 | orchestrator | 2025-06-02 13:18:46.422761 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-02 13:18:46.422772 | orchestrator | skipping: no hosts matched 2025-06-02 13:18:46.422783 | orchestrator | 2025-06-02 13:18:46.422793 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-02 13:18:46.422804 | orchestrator | skipping: no 
hosts matched 2025-06-02 13:18:46.422814 | orchestrator | 2025-06-02 13:18:46.422825 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-02 13:18:46.422835 | orchestrator | skipping: no hosts matched 2025-06-02 13:18:46.422846 | orchestrator | 2025-06-02 13:18:46.422857 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:18:46.422868 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 13:18:46.422888 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 13:18:46.422899 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:18:46.422910 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:18:46.422921 | orchestrator | 2025-06-02 13:18:46.422931 | orchestrator | 2025-06-02 13:18:46.422942 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:18:46.422953 | orchestrator | Monday 02 June 2025 13:18:43 +0000 (0:00:02.646) 0:02:11.343 *********** 2025-06-02 13:18:46.422963 | orchestrator | =============================================================================== 2025-06-02 13:18:46.422974 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.69s 2025-06-02 13:18:46.422984 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.94s 2025-06-02 13:18:46.422995 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.70s 2025-06-02 13:18:46.423006 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.61s 2025-06-02 13:18:46.423016 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 2.65s 2025-06-02 13:18:46.423027 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.38s 2025-06-02 13:18:46.423037 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.14s 2025-06-02 13:18:46.423048 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.88s 2025-06-02 13:18:46.423058 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.78s 2025-06-02 13:18:46.423069 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.52s 2025-06-02 13:18:46.423080 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.49s 2025-06-02 13:18:46.423090 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s 2025-06-02 13:18:46.423101 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.40s 2025-06-02 13:18:46.423111 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.31s 2025-06-02 13:18:46.423122 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.26s 2025-06-02 13:18:46.423142 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.19s 2025-06-02 13:18:46.423153 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.18s 2025-06-02 13:18:46.423168 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.96s 2025-06-02 13:18:46.423179 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.95s 2025-06-02 13:18:46.423190 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.93s 2025-06-02 13:18:46.423200 | orchestrator | 2025-06-02 13:18:46 | INFO  | Task 
3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:46.423211 | orchestrator | 2025-06-02 13:18:46 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:46.423222 | orchestrator | 2025-06-02 13:18:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:18:49.470095 | orchestrator | 2025-06-02 13:18:49 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:18:49.474934 | orchestrator | 2025-06-02 13:18:49 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:18:49.475033 | orchestrator | 2025-06-02 13:18:49 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:18:49.475051 | orchestrator | 2025-06-02 13:18:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:19:38.214345 | orchestrator | 2025-06-02 13:19:38 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state STARTED 2025-06-02 13:19:38.215349 | orchestrator | 2025-06-02 13:19:38 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:19:38.217272 | orchestrator | 2025-06-02 13:19:38 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:19:38.217314 | orchestrator | 2025-06-02 13:19:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:19:41.259421 | orchestrator | 2025-06-02 13:19:41.259532 | orchestrator | 2025-06-02 13:19:41.259550 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:19:41.259562 | orchestrator | 2025-06-02 13:19:41.259579 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:19:41.259599 | orchestrator | Monday 02 June 2025 13:17:15 +0000 (0:00:00.198) 0:00:00.198 *********** 2025-06-02 13:19:41.259611 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.259623 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.259793 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.259814 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:19:41.259833 | orchestrator | ok: [testbed-node-4] 2025-06-02
13:19:41.259853 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:19:41.259867 | orchestrator | 2025-06-02 13:19:41.259878 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:19:41.259889 | orchestrator | Monday 02 June 2025 13:17:16 +0000 (0:00:01.289) 0:00:01.487 *********** 2025-06-02 13:19:41.259903 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-02 13:19:41.259916 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-02 13:19:41.259928 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-02 13:19:41.259940 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-02 13:19:41.259953 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-02 13:19:41.259966 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-02 13:19:41.259977 | orchestrator | 2025-06-02 13:19:41.259990 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-02 13:19:41.260002 | orchestrator | 2025-06-02 13:19:41.260015 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-02 13:19:41.260027 | orchestrator | Monday 02 June 2025 13:17:17 +0000 (0:00:00.885) 0:00:02.372 *********** 2025-06-02 13:19:41.260041 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:19:41.260055 | orchestrator | 2025-06-02 13:19:41.260067 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-02 13:19:41.260079 | orchestrator | Monday 02 June 2025 13:17:18 +0000 (0:00:01.039) 0:00:03.411 *********** 2025-06-02 13:19:41.260093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260224 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260239 | orchestrator | 2025-06-02 13:19:41.260252 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-02 13:19:41.260263 | orchestrator | Monday 02 June 2025 13:17:19 +0000 (0:00:01.232) 0:00:04.643 *********** 2025-06-02 13:19:41.260274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.260285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 13:19:41.260297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260349 | orchestrator |
2025-06-02 13:19:41.260360 | orchestrator |
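The config.json files copied by the task above drive kolla's container entrypoint: they tell it which command to exec and which files or permissions to set up before the service starts. A minimal, illustrative sketch of such a payload follows; the command path and permissions entry are assumptions for illustration, since the real file is rendered from kolla-ansible's templates, not shown in this log.

```python
import json

# Hypothetical sketch of a kolla config.json for the ovn_controller
# container; command path and permissions are assumed, not taken from
# the actual template.
config = {
    "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",
    "config_files": [],
    "permissions": [
        {"path": "/var/log/kolla/openvswitch", "owner": "root:root", "recurse": True},
    ],
}

# The file is bind-mounted read-only into the container at
# /var/lib/kolla/config_files/, matching the volume list in the log.
print(json.dumps(config, indent=4))
```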
TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-02 13:19:41.260371 | orchestrator | Monday 02 June 2025  13:17:20 +0000 (0:00:01.357)       0:00:06.001 ***********
2025-06-02 13:19:41.260382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260439 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260470 | orchestrator |
2025-06-02 13:19:41.260481 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 13:19:41.260492 | orchestrator | Monday 02 June 2025  13:17:21 +0000 (0:00:00.908)       0:00:06.910 ***********
2025-06-02 13:19:41.260502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260581 | orchestrator |
2025-06-02 13:19:41.260592 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-02 13:19:41.260603 | orchestrator | Monday 02 June 2025  13:17:23 +0000 (0:00:01.553)       0:00:08.464 ***********
2025-06-02 13:19:41.260614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:19:41.260687 | orchestrator |
2025-06-02 13:19:41.260698 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-02 13:19:41.260709 | orchestrator | Monday 02 June 2025  13:17:24 +0000 (0:00:01.571)       0:00:10.035 ***********
2025-06-02 13:19:41.260720 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:19:41.260731 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:19:41.260742 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:19:41.260772 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:19:41.260783 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:19:41.260793 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:19:41.260804 | orchestrator |
2025-06-02 13:19:41.260815 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-02 13:19:41.260825 | orchestrator | Monday 02 June 2025  13:17:27 +0000 (0:00:02.579)       0:00:12.614 ***********
2025-06-02 13:19:41.260836 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-02 13:19:41.260847 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-02 13:19:41.260863 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-02 13:19:41.260880 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-02 13:19:41.260891 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-02 13:19:41.260901 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-02 13:19:41.260912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260923 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260943 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260953 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260963 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260974 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 13:19:41.260985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.260998 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.261009 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.261019 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.261030 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.261041 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 13:19:41.261051 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261063 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261074 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261095 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261105 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 13:19:41.261116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261127 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261137 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261158 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261201 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 13:19:41.261214 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261234 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261253 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261294 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261314 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 13:19:41.261329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 13:19:41.261340 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 13:19:41.261359 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 13:19:41.261370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 13:19:41.261394 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 13:19:41.261406 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 13:19:41.261417 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-02 13:19:41.261428 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-02 13:19:41.261439 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-02 13:19:41.261450 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-02 13:19:41.261461 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-02 13:19:41.261472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-02 13:19:41.261482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 13:19:41.261493 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 13:19:41.261504 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 13:19:41.261515 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 13:19:41.261525 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 13:19:41.261536 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 13:19:41.261547 | orchestrator |
2025-06-02 13:19:41.261558 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261569 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:17.964)       0:00:30.579 ***********
2025-06-02 13:19:41.261579 | orchestrator |
2025-06-02 13:19:41.261591 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261601 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.078)       0:00:30.658 ***********
2025-06-02 13:19:41.261612 | orchestrator |
2025-06-02 13:19:41.261622 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261633 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.064)       0:00:30.723 ***********
2025-06-02 13:19:41.261644 | orchestrator |
2025-06-02 13:19:41.261655 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261665 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.061)       0:00:30.785 ***********
2025-06-02 13:19:41.261676 | orchestrator |
2025-06-02 13:19:41.261687 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261698 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.062)       0:00:30.847 ***********
2025-06-02 13:19:41.261708 | orchestrator |
2025-06-02 13:19:41.261719 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 13:19:41.261729 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.062)       0:00:30.909 ***********
2025-06-02 13:19:41.261776 | orchestrator |
2025-06-02 13:19:41.261788 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-02 13:19:41.261799 | orchestrator | Monday 02 June 2025  13:17:45 +0000 (0:00:00.062)       0:00:30.972 ***********
2025-06-02 13:19:41.261810 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:19:41.261821 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.261831 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.261842 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:19:41.261853 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.261863 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:19:41.261874 | orchestrator |
2025-06-02 13:19:41.261885 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-02 13:19:41.261896 | orchestrator | Monday 02 June 2025  13:17:47 +0000 (0:00:01.797)       0:00:32.769 ***********
2025-06-02 13:19:41.261906 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:19:41.261917 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:19:41.261928 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:19:41.261938 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:19:41.261949 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:19:41.261960 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:19:41.261970 | orchestrator |
2025-06-02 13:19:41.261981 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-02 13:19:41.261992 | orchestrator |
2025-06-02 13:19:41.262002 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 13:19:41.262013 | orchestrator | Monday 02 June 2025  13:18:24 +0000 (0:00:36.414)       0:01:09.184 ***********
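The "Configure OVN in OVSDB" task above writes per-chassis external_ids into the local Open vSwitch database (the equivalent of `ovs-vsctl set open_vswitch . external_ids:ovn-remote=...`). The values visible in the log can be assembled with a small sketch; the helper below only builds the settings dict (function and parameter names are my own), it does not talk to OVSDB:

```python
# Sketch: assemble the external_ids written by the task above.
# The southbound DB hosts are the three control nodes seen in the log.
sb_db_hosts = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
ovn_remote = ",".join(f"tcp:{ip}:6642" for ip in sb_db_hosts)

def external_ids(node_ip: str, is_gateway: bool) -> dict:
    """Build the external_ids for one chassis (illustrative helper)."""
    ids = {
        "ovn-encap-ip": node_ip,          # Geneve tunnel endpoint of this node
        "ovn-encap-type": "geneve",
        "ovn-remote": ovn_remote,          # all SB DB members, port 6642
        "ovn-remote-probe-interval": "60000",
        "ovn-openflow-probe-interval": "60",
        "ovn-monitor-all": "false",
    }
    if is_gateway:
        # In this run only testbed-node-0..2 keep bridge mappings and act
        # as gateway chassis; on the other nodes both keys are removed.
        ids["ovn-bridge-mappings"] = "physnet1:br-ex"
        ids["ovn-cms-options"] = "enable-chassis-as-gw,availability-zones=nova"
    return ids

print(external_ids("192.168.16.10", is_gateway=True)["ovn-remote"])
```

On the compute-only nodes (3-5) the log instead shows `ovn-chassis-mac-mappings` being set, so the real task is driven by more inventory state than this sketch models.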
2025-06-02 13:19:41.262085 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:19:41.262105 | orchestrator |
2025-06-02 13:19:41.262123 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 13:19:41.262143 | orchestrator | Monday 02 June 2025  13:18:24 +0000 (0:00:00.444)       0:01:09.628 ***********
2025-06-02 13:19:41.262160 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:19:41.262172 | orchestrator |
2025-06-02 13:19:41.262190 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-02 13:19:41.262202 | orchestrator | Monday 02 June 2025  13:18:25 +0000 (0:00:00.624)       0:01:10.253 ***********
2025-06-02 13:19:41.262212 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.262223 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.262234 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.262244 | orchestrator |
2025-06-02 13:19:41.262255 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-02 13:19:41.262266 | orchestrator | Monday 02 June 2025  13:18:26 +0000 (0:00:00.886)       0:01:11.140 ***********
2025-06-02 13:19:41.262276 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.262287 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.262297 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.262308 | orchestrator |
2025-06-02 13:19:41.262319 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-02 13:19:41.262330 | orchestrator | Monday 02 June 2025  13:18:26 +0000 (0:00:00.601)       0:01:11.742 ***********
2025-06-02 13:19:41.262340 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.262351 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.262361 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.262372 | orchestrator |
2025-06-02 13:19:41.262383 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-02 13:19:41.262394 | orchestrator | Monday 02 June 2025  13:18:27 +0000 (0:00:00.565)       0:01:12.307 ***********
2025-06-02 13:19:41.262404 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.262415 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.262426 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.262437 | orchestrator |
2025-06-02 13:19:41.262447 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-02 13:19:41.262472 | orchestrator | Monday 02 June 2025  13:18:27 +0000 (0:00:00.777)       0:01:13.084 ***********
2025-06-02 13:19:41.262483 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.262494 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.262504 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.262515 | orchestrator |
2025-06-02 13:19:41.262525 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-02 13:19:41.262536 | orchestrator | Monday 02 June 2025  13:18:28 +0000 (0:00:00.567)       0:01:13.651 ***********
2025-06-02 13:19:41.262547 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:19:41.262558 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:19:41.262568 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:19:41.262579 | orchestrator |
2025-06-02 13:19:41.262589 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-02 13:19:41.262600 | orchestrator | Monday 02 June 2025  13:18:28 +0000 (0:00:00.401)       0:01:14.053 ***********
2025-06-02 13:19:41.262611 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:19:41.262621 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:19:41.262632 | orchestrator | skipping: [testbed-node-2]
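The port-liveness checks above (skipped here, since this is a fresh deployment with no pre-existing DB volumes) probe whether an OVN NB/SB server already answers on its cluster port. Functionally such a probe is just a TCP connect with a timeout, which can be sketched as:

```python
import socket

def port_alive(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect to host:port succeeds within timeout.

    Rough stand-in for the role's port-liveness check; the real task
    uses Ansible mechanisms rather than this helper.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# OVN NB listens on 6641 and SB on 6642 on the DB hosts seen in the log;
# on a host without a running DB this simply returns False.
print(port_alive("127.0.0.1", 6641))
```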
2025-06-02 13:19:41.262642 | orchestrator | 2025-06-02 13:19:41.262653 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-02 13:19:41.262664 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:00.470) 0:01:14.523 *********** 2025-06-02 13:19:41.262675 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.262685 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.262696 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.262706 | orchestrator | 2025-06-02 13:19:41.262717 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-02 13:19:41.262728 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:00.496) 0:01:15.020 *********** 2025-06-02 13:19:41.262738 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.262800 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.262813 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.262824 | orchestrator | 2025-06-02 13:19:41.262835 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-02 13:19:41.262845 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.212) 0:01:15.232 *********** 2025-06-02 13:19:41.262856 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.262867 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.262877 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.262888 | orchestrator | 2025-06-02 13:19:41.262899 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-02 13:19:41.262909 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.269) 0:01:15.502 *********** 2025-06-02 13:19:41.262920 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.262931 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.262941 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 13:19:41.262952 | orchestrator | 2025-06-02 13:19:41.262962 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-02 13:19:41.262973 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.271) 0:01:15.774 *********** 2025-06-02 13:19:41.262984 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.262994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263005 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263016 | orchestrator | 2025-06-02 13:19:41.263026 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-02 13:19:41.263037 | orchestrator | Monday 02 June 2025 13:18:31 +0000 (0:00:00.643) 0:01:16.418 *********** 2025-06-02 13:19:41.263048 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263058 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263069 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263080 | orchestrator | 2025-06-02 13:19:41.263091 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-02 13:19:41.263101 | orchestrator | Monday 02 June 2025 13:18:31 +0000 (0:00:00.248) 0:01:16.666 *********** 2025-06-02 13:19:41.263120 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263131 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263141 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263152 | orchestrator | 2025-06-02 13:19:41.263162 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-02 13:19:41.263173 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:00.620) 0:01:17.287 *********** 2025-06-02 13:19:41.263184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263200 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263211 | orchestrator | skipping: [testbed-node-2] 
2025-06-02 13:19:41.263222 | orchestrator | 2025-06-02 13:19:41.263239 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-02 13:19:41.263250 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:00.335) 0:01:17.622 *********** 2025-06-02 13:19:41.263260 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263270 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263289 | orchestrator | 2025-06-02 13:19:41.263298 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-02 13:19:41.263308 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.562) 0:01:18.185 *********** 2025-06-02 13:19:41.263317 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263327 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263337 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263346 | orchestrator | 2025-06-02 13:19:41.263356 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 13:19:41.263365 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.365) 0:01:18.550 *********** 2025-06-02 13:19:41.263375 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:19:41.263385 | orchestrator | 2025-06-02 13:19:41.263394 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-02 13:19:41.263404 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.476) 0:01:19.026 *********** 2025-06-02 13:19:41.263413 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.263423 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.263433 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.263442 | orchestrator | 2025-06-02 13:19:41.263452 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 13:19:41.263462 | orchestrator | Monday 02 June 2025 13:18:34 +0000 (0:00:00.639) 0:01:19.665 *********** 2025-06-02 13:19:41.263471 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.263481 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.263490 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.263500 | orchestrator | 2025-06-02 13:19:41.263509 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 13:19:41.263519 | orchestrator | Monday 02 June 2025 13:18:34 +0000 (0:00:00.436) 0:01:20.101 *********** 2025-06-02 13:19:41.263529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263538 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263547 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263557 | orchestrator | 2025-06-02 13:19:41.263566 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 13:19:41.263576 | orchestrator | Monday 02 June 2025 13:18:35 +0000 (0:00:00.295) 0:01:20.397 *********** 2025-06-02 13:19:41.263585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263595 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263604 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263614 | orchestrator | 2025-06-02 13:19:41.263623 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-02 13:19:41.263633 | orchestrator | Monday 02 June 2025 13:18:35 +0000 (0:00:00.308) 0:01:20.705 *********** 2025-06-02 13:19:41.263642 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263658 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263667 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263677 | orchestrator | 2025-06-02 13:19:41.263686 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 13:19:41.263696 | orchestrator | Monday 02 June 2025 13:18:35 +0000 (0:00:00.395) 0:01:21.101 *********** 2025-06-02 13:19:41.263706 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263715 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263725 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263734 | orchestrator | 2025-06-02 13:19:41.263744 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 13:19:41.263769 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.291) 0:01:21.393 *********** 2025-06-02 13:19:41.263779 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263788 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263798 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263807 | orchestrator | 2025-06-02 13:19:41.263817 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 13:19:41.263826 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.352) 0:01:21.745 *********** 2025-06-02 13:19:41.263836 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.263845 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.263855 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.263864 | orchestrator | 2025-06-02 13:19:41.263874 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 13:19:41.263883 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.328) 0:01:22.074 *********** 2025-06-02 13:19:41.263894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.263906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.263935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41 | INFO  | Task f307135e-a536-4863-bfe0-3848766da4dc is in state SUCCESS 2025-06-02 13:19:41.263960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.263972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.263982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.263999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264028 | orchestrator | 2025-06-02 13:19:41.264037 | orchestrator | TASK [ovn-db : Copying over 
config.json files for services] ******************** 2025-06-02 13:19:41.264047 | orchestrator | Monday 02 June 2025 13:18:38 +0000 (0:00:01.595) 0:01:23.669 *********** 2025-06-02 13:19:41.264057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264163 | orchestrator | 2025-06-02 13:19:41.264172 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 13:19:41.264182 | orchestrator | Monday 02 June 2025 13:18:42 +0000 (0:00:04.273) 0:01:27.943 *********** 2025-06-02 13:19:41.264192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-02 13:19:41.264291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.264301 | orchestrator | 2025-06-02 13:19:41.264311 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 13:19:41.264320 | orchestrator | Monday 02 June 2025 13:18:44 +0000 (0:00:02.014) 0:01:29.958 *********** 2025-06-02 13:19:41.264330 | orchestrator | 2025-06-02 13:19:41.264340 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 13:19:41.264349 | orchestrator | Monday 02 June 2025 13:18:44 +0000 (0:00:00.060) 0:01:30.018 *********** 2025-06-02 13:19:41.264359 | orchestrator | 2025-06-02 13:19:41.264368 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 13:19:41.264377 | orchestrator | Monday 02 June 2025 13:18:44 +0000 (0:00:00.058) 0:01:30.077 *********** 2025-06-02 13:19:41.264387 | orchestrator | 2025-06-02 13:19:41.264396 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 13:19:41.264406 | orchestrator | Monday 02 June 2025 13:18:45 +0000 (0:00:00.059) 0:01:30.136 *********** 2025-06-02 13:19:41.264415 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:19:41.264424 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:19:41.264434 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:19:41.264443 | orchestrator | 2025-06-02 13:19:41.264453 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 
2025-06-02 13:19:41.264463 | orchestrator | Monday 02 June 2025 13:18:52 +0000 (0:00:07.943) 0:01:38.080 *********** 2025-06-02 13:19:41.264472 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:19:41.264481 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:19:41.264491 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:19:41.264500 | orchestrator | 2025-06-02 13:19:41.264510 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 13:19:41.264520 | orchestrator | Monday 02 June 2025 13:18:55 +0000 (0:00:02.651) 0:01:40.732 *********** 2025-06-02 13:19:41.264529 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:19:41.264539 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:19:41.264548 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:19:41.264557 | orchestrator | 2025-06-02 13:19:41.264567 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 13:19:41.264582 | orchestrator | Monday 02 June 2025 13:19:03 +0000 (0:00:07.493) 0:01:48.225 *********** 2025-06-02 13:19:41.264592 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:19:41.264601 | orchestrator | 2025-06-02 13:19:41.264610 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 13:19:41.264620 | orchestrator | Monday 02 June 2025 13:19:03 +0000 (0:00:00.123) 0:01:48.349 *********** 2025-06-02 13:19:41.264630 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.264645 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.264655 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.264664 | orchestrator | 2025-06-02 13:19:41.264674 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 13:19:41.264684 | orchestrator | Monday 02 June 2025 13:19:04 +0000 (0:00:00.817) 0:01:49.166 *********** 2025-06-02 13:19:41.264694 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 13:19:41.264703 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.264713 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:19:41.264722 | orchestrator | 2025-06-02 13:19:41.264732 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 13:19:41.264741 | orchestrator | Monday 02 June 2025 13:19:04 +0000 (0:00:00.874) 0:01:50.041 *********** 2025-06-02 13:19:41.264764 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.264774 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.264784 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.264793 | orchestrator | 2025-06-02 13:19:41.264803 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 13:19:41.264813 | orchestrator | Monday 02 June 2025 13:19:05 +0000 (0:00:00.835) 0:01:50.877 *********** 2025-06-02 13:19:41.264822 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:19:41.264832 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:19:41.264841 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:19:41.264851 | orchestrator | 2025-06-02 13:19:41.264860 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 13:19:41.264870 | orchestrator | Monday 02 June 2025 13:19:06 +0000 (0:00:00.582) 0:01:51.459 *********** 2025-06-02 13:19:41.264879 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.264889 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.264899 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.264908 | orchestrator | 2025-06-02 13:19:41.264918 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 13:19:41.264927 | orchestrator | Monday 02 June 2025 13:19:07 +0000 (0:00:00.918) 0:01:52.377 *********** 2025-06-02 13:19:41.264937 | orchestrator | ok: [testbed-node-0] 2025-06-02 
13:19:41.264946 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.264956 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.264965 | orchestrator | 2025-06-02 13:19:41.264975 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 13:19:41.264985 | orchestrator | Monday 02 June 2025 13:19:08 +0000 (0:00:01.277) 0:01:53.655 *********** 2025-06-02 13:19:41.264994 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:19:41.265003 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:19:41.265013 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:19:41.265022 | orchestrator | 2025-06-02 13:19:41.265034 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 13:19:41.265051 | orchestrator | Monday 02 June 2025 13:19:08 +0000 (0:00:00.301) 0:01:53.956 *********** 2025-06-02 13:19:41.265062 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265073 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265138 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265162 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265184 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265204 | orchestrator | 2025-06-02 13:19:41.265214 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 13:19:41.265224 | orchestrator | Monday 02 June 2025 13:19:10 +0000 (0:00:01.394) 0:01:55.351 *********** 2025-06-02 13:19:41.265234 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265244 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265260 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265270 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265344 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265374 | orchestrator | 2025-06-02 13:19:41.265384 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 13:19:41.265393 | orchestrator | Monday 02 June 2025 13:19:14 +0000 (0:00:03.936) 0:01:59.288 *********** 2025-06-02 13:19:41.265403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265430 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 13:19:41.265474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265489 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:19:41.265509 | orchestrator | 2025-06-02 13:19:41.265519 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 13:19:41.265528 | orchestrator | Monday 02 June 2025 13:19:16 +0000 (0:00:02.609) 0:02:01.897 *********** 2025-06-02 13:19:41.265538 | orchestrator | 2025-06-02 13:19:41.265548 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 13:19:41.265557 | orchestrator | Monday 02 June 2025 13:19:16 +0000 (0:00:00.059) 0:02:01.957 *********** 2025-06-02 13:19:41.265567 | orchestrator | 2025-06-02 
13:19:41.265576 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-02 13:19:41.265591 | orchestrator | Monday 02 June 2025 13:19:16 +0000 (0:00:00.065) 0:02:02.022 ***********
2025-06-02 13:19:41.265601 | orchestrator |
2025-06-02 13:19:41.265653 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-02 13:19:41.265663 | orchestrator | Monday 02 June 2025 13:19:16 +0000 (0:00:00.062) 0:02:02.085 ***********
2025-06-02 13:19:41.265672 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:19:41.265682 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:19:41.265691 | orchestrator |
2025-06-02 13:19:41.265701 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-02 13:19:41.265710 | orchestrator | Monday 02 June 2025 13:19:22 +0000 (0:00:06.007) 0:02:08.092 ***********
2025-06-02 13:19:41.265720 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:19:41.265729 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:19:41.265738 | orchestrator |
2025-06-02 13:19:41.265794 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-02 13:19:41.265806 | orchestrator | Monday 02 June 2025 13:19:29 +0000 (0:00:06.047) 0:02:14.140 ***********
2025-06-02 13:19:41.265816 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:19:41.265826 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:19:41.265835 | orchestrator |
2025-06-02 13:19:41.265845 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-02 13:19:41.265855 | orchestrator | Monday 02 June 2025 13:19:35 +0000 (0:00:06.207) 0:02:20.347 ***********
2025-06-02 13:19:41.265864 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:19:41.265874 | orchestrator |
2025-06-02 13:19:41.265884 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-02 13:19:41.265893 | orchestrator | Monday 02 June 2025 13:19:35 +0000 (0:00:00.117) 0:02:20.465 ***********
2025-06-02 13:19:41.265903 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.265913 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.265923 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.265931 | orchestrator |
2025-06-02 13:19:41.265939 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-02 13:19:41.265948 | orchestrator | Monday 02 June 2025 13:19:36 +0000 (0:00:01.093) 0:02:21.558 ***********
2025-06-02 13:19:41.265961 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:19:41.265988 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:19:41.265996 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:19:41.266004 | orchestrator |
2025-06-02 13:19:41.266012 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-02 13:19:41.266156 | orchestrator | Monday 02 June 2025 13:19:37 +0000 (0:00:00.674) 0:02:22.232 ***********
2025-06-02 13:19:41.266167 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.266175 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.266184 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.266191 | orchestrator |
2025-06-02 13:19:41.266199 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-02 13:19:41.266207 | orchestrator | Monday 02 June 2025 13:19:37 +0000 (0:00:00.842) 0:02:23.075 ***********
2025-06-02 13:19:41.266216 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:19:41.266224 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:19:41.266231 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:19:41.266240 | orchestrator |
2025-06-02 13:19:41.266248 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-02 13:19:41.266256 | orchestrator | Monday 02 June 2025 13:19:38 +0000 (0:00:00.638) 0:02:23.714 ***********
2025-06-02 13:19:41.266265 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.266274 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.266283 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.266291 | orchestrator |
2025-06-02 13:19:41.266299 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-02 13:19:41.266308 | orchestrator | Monday 02 June 2025 13:19:39 +0000 (0:00:00.940) 0:02:24.655 ***********
2025-06-02 13:19:41.266324 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:19:41.266332 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:19:41.266341 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:19:41.266349 | orchestrator |
2025-06-02 13:19:41.266357 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:19:41.266371 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 13:19:41.266387 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-02 13:19:41.266396 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-02 13:19:41.266404 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:19:41.266412 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:19:41.266420 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:19:41.266428 | orchestrator |
2025-06-02 13:19:41.266436 | orchestrator |
2025-06-02 13:19:41.266443 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:19:41.266451 | orchestrator | Monday 02 June 2025 13:19:40 +0000 (0:00:00.940) 0:02:25.595 ***********
2025-06-02 13:19:41.266459 | orchestrator | ===============================================================================
2025-06-02 13:19:41.266467 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 36.41s
2025-06-02 13:19:41.266475 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.96s
2025-06-02 13:19:41.266482 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.95s
2025-06-02 13:19:41.266490 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.70s
2025-06-02 13:19:41.266498 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.70s
2025-06-02 13:19:41.266506 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.27s
2025-06-02 13:19:41.266513 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.94s
2025-06-02 13:19:41.266521 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.61s
2025-06-02 13:19:41.266529 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.58s
2025-06-02 13:19:41.266536 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.01s
2025-06-02 13:19:41.266544 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.80s
2025-06-02 13:19:41.266552 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s
2025-06-02 13:19:41.266560 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.57s
2025-06-02 13:19:41.266567 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.55s
2025-06-02 13:19:41.266575 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.39s
2025-06-02 13:19:41.266583 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.36s
2025-06-02 13:19:41.266590 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.29s
2025-06-02 13:19:41.266598 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.28s
2025-06-02 13:19:41.266606 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.23s
2025-06-02 13:19:41.266614 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.09s
2025-06-02 13:19:41.266622 | orchestrator | 2025-06-02 13:19:41 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:19:41.266635 | orchestrator | 2025-06-02 13:19:41 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:19:41.266643 | orchestrator | 2025-06-02 13:19:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:19:44.313928 | orchestrator | 2025-06-02 13:19:44 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:19:44.314146 | orchestrator | 2025-06-02 13:19:44 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:19:44.314165 | orchestrator | 2025-06-02 13:19:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:19:47.351582 | orchestrator | 2025-06-02 13:19:47 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:19:47.352350 | orchestrator | 2025-06-02 13:19:47 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED
2025-06-02 13:19:47.352691 | orchestrator | 2025-06-02 13:19:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:19:50.392264 | orchestrator | 2025-06-02 13:19:50 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state
STARTED 2025-06-02 13:19:50.393259 | orchestrator | 2025-06-02 13:19:50 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:19:50.393291 | orchestrator | 2025-06-02 13:19:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:19:53.427378 | orchestrator | 2025-06-02 13:19:53 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:19:53.427570 | orchestrator | 2025-06-02 13:19:53 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:19:53.427591 | orchestrator | 2025-06-02 13:19:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:19:56.470745 | orchestrator | 2025-06-02 13:19:56 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:19:56.472575 | orchestrator | 2025-06-02 13:19:56 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:19:56.472629 | orchestrator | 2025-06-02 13:19:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:19:59.519498 | orchestrator | 2025-06-02 13:19:59 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:19:59.519601 | orchestrator | 2025-06-02 13:19:59 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:19:59.519617 | orchestrator | 2025-06-02 13:19:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:02.567660 | orchestrator | 2025-06-02 13:20:02 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:02.567765 | orchestrator | 2025-06-02 13:20:02 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:02.567781 | orchestrator | 2025-06-02 13:20:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:05.625250 | orchestrator | 2025-06-02 13:20:05 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:05.625468 | orchestrator | 2025-06-02 13:20:05 | INFO  
| Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:05.625490 | orchestrator | 2025-06-02 13:20:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:08.666533 | orchestrator | 2025-06-02 13:20:08 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:08.671106 | orchestrator | 2025-06-02 13:20:08 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:08.671148 | orchestrator | 2025-06-02 13:20:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:11.706434 | orchestrator | 2025-06-02 13:20:11 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:11.706539 | orchestrator | 2025-06-02 13:20:11 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:11.706553 | orchestrator | 2025-06-02 13:20:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:14.743420 | orchestrator | 2025-06-02 13:20:14 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:14.744773 | orchestrator | 2025-06-02 13:20:14 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:14.744866 | orchestrator | 2025-06-02 13:20:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:17.792331 | orchestrator | 2025-06-02 13:20:17 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:17.796034 | orchestrator | 2025-06-02 13:20:17 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:17.796077 | orchestrator | 2025-06-02 13:20:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:20.835284 | orchestrator | 2025-06-02 13:20:20 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:20.835863 | orchestrator | 2025-06-02 13:20:20 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 
13:20:20.835895 | orchestrator | 2025-06-02 13:20:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:23.875391 | orchestrator | 2025-06-02 13:20:23 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:23.877959 | orchestrator | 2025-06-02 13:20:23 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:23.877992 | orchestrator | 2025-06-02 13:20:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:26.927799 | orchestrator | 2025-06-02 13:20:26 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:26.930889 | orchestrator | 2025-06-02 13:20:26 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:26.930934 | orchestrator | 2025-06-02 13:20:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:29.978314 | orchestrator | 2025-06-02 13:20:29 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:29.979376 | orchestrator | 2025-06-02 13:20:29 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:29.979749 | orchestrator | 2025-06-02 13:20:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:33.036971 | orchestrator | 2025-06-02 13:20:33 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:33.037099 | orchestrator | 2025-06-02 13:20:33 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:33.037114 | orchestrator | 2025-06-02 13:20:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:36.082868 | orchestrator | 2025-06-02 13:20:36 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:36.087412 | orchestrator | 2025-06-02 13:20:36 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:36.087536 | orchestrator | 2025-06-02 13:20:36 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 13:20:39.139179 | orchestrator | 2025-06-02 13:20:39 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:39.140143 | orchestrator | 2025-06-02 13:20:39 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:39.140361 | orchestrator | 2025-06-02 13:20:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:42.185949 | orchestrator | 2025-06-02 13:20:42 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:42.188428 | orchestrator | 2025-06-02 13:20:42 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:42.188527 | orchestrator | 2025-06-02 13:20:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:45.235973 | orchestrator | 2025-06-02 13:20:45 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:45.236416 | orchestrator | 2025-06-02 13:20:45 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:45.237505 | orchestrator | 2025-06-02 13:20:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:48.282394 | orchestrator | 2025-06-02 13:20:48 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:48.282498 | orchestrator | 2025-06-02 13:20:48 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:48.282514 | orchestrator | 2025-06-02 13:20:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:51.325657 | orchestrator | 2025-06-02 13:20:51 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:51.325777 | orchestrator | 2025-06-02 13:20:51 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:51.325796 | orchestrator | 2025-06-02 13:20:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:54.375049 | orchestrator | 2025-06-02 
13:20:54 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:54.381345 | orchestrator | 2025-06-02 13:20:54 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:54.381378 | orchestrator | 2025-06-02 13:20:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:20:57.435231 | orchestrator | 2025-06-02 13:20:57 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:20:57.437459 | orchestrator | 2025-06-02 13:20:57 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:20:57.437493 | orchestrator | 2025-06-02 13:20:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:00.486282 | orchestrator | 2025-06-02 13:21:00 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:00.486724 | orchestrator | 2025-06-02 13:21:00 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:00.486754 | orchestrator | 2025-06-02 13:21:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:03.533411 | orchestrator | 2025-06-02 13:21:03 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:03.533963 | orchestrator | 2025-06-02 13:21:03 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:03.534007 | orchestrator | 2025-06-02 13:21:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:06.581278 | orchestrator | 2025-06-02 13:21:06 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:06.584281 | orchestrator | 2025-06-02 13:21:06 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:06.584338 | orchestrator | 2025-06-02 13:21:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:09.629771 | orchestrator | 2025-06-02 13:21:09 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state 
STARTED 2025-06-02 13:21:09.630004 | orchestrator | 2025-06-02 13:21:09 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:09.630080 | orchestrator | 2025-06-02 13:21:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:12.662068 | orchestrator | 2025-06-02 13:21:12 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:12.664683 | orchestrator | 2025-06-02 13:21:12 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:12.664714 | orchestrator | 2025-06-02 13:21:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:15.713392 | orchestrator | 2025-06-02 13:21:15 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:15.715792 | orchestrator | 2025-06-02 13:21:15 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:15.715875 | orchestrator | 2025-06-02 13:21:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:18.765834 | orchestrator | 2025-06-02 13:21:18 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:18.768673 | orchestrator | 2025-06-02 13:21:18 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:18.768716 | orchestrator | 2025-06-02 13:21:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:21.824871 | orchestrator | 2025-06-02 13:21:21 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:21.826631 | orchestrator | 2025-06-02 13:21:21 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:21.826693 | orchestrator | 2025-06-02 13:21:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:24.882727 | orchestrator | 2025-06-02 13:21:24 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:24.885205 | orchestrator | 2025-06-02 13:21:24 | INFO  
| Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:24.885239 | orchestrator | 2025-06-02 13:21:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:27.932466 | orchestrator | 2025-06-02 13:21:27 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:27.934493 | orchestrator | 2025-06-02 13:21:27 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:27.934527 | orchestrator | 2025-06-02 13:21:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:30.984395 | orchestrator | 2025-06-02 13:21:30 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:30.986592 | orchestrator | 2025-06-02 13:21:30 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:30.986683 | orchestrator | 2025-06-02 13:21:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:34.033216 | orchestrator | 2025-06-02 13:21:34 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:34.035215 | orchestrator | 2025-06-02 13:21:34 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:34.035676 | orchestrator | 2025-06-02 13:21:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:37.077047 | orchestrator | 2025-06-02 13:21:37 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:37.077593 | orchestrator | 2025-06-02 13:21:37 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:37.077628 | orchestrator | 2025-06-02 13:21:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:40.122619 | orchestrator | 2025-06-02 13:21:40 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:40.123525 | orchestrator | 2025-06-02 13:21:40 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 
13:21:40.123897 | orchestrator | 2025-06-02 13:21:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:43.171982 | orchestrator | 2025-06-02 13:21:43 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:43.172087 | orchestrator | 2025-06-02 13:21:43 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:43.174863 | orchestrator | 2025-06-02 13:21:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:46.209840 | orchestrator | 2025-06-02 13:21:46 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:46.211785 | orchestrator | 2025-06-02 13:21:46 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:46.211820 | orchestrator | 2025-06-02 13:21:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:49.253253 | orchestrator | 2025-06-02 13:21:49 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:49.256014 | orchestrator | 2025-06-02 13:21:49 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:49.256054 | orchestrator | 2025-06-02 13:21:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:52.309178 | orchestrator | 2025-06-02 13:21:52 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:52.311615 | orchestrator | 2025-06-02 13:21:52 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:52.311655 | orchestrator | 2025-06-02 13:21:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:21:55.358318 | orchestrator | 2025-06-02 13:21:55 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:55.364083 | orchestrator | 2025-06-02 13:21:55 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:55.364151 | orchestrator | 2025-06-02 13:21:55 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 13:21:58.417461 | orchestrator | 2025-06-02 13:21:58 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:21:58.417573 | orchestrator | 2025-06-02 13:21:58 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:21:58.418607 | orchestrator | 2025-06-02 13:21:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:22:01.474615 | orchestrator | 2025-06-02 13:22:01 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:22:01.476760 | orchestrator | 2025-06-02 13:22:01 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:22:01.477068 | orchestrator | 2025-06-02 13:22:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:22:04.527185 | orchestrator | 2025-06-02 13:22:04 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:22:04.530123 | orchestrator | 2025-06-02 13:22:04 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:22:04.530167 | orchestrator | 2025-06-02 13:22:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:22:07.573590 | orchestrator | 2025-06-02 13:22:07 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:22:07.573690 | orchestrator | 2025-06-02 13:22:07 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:22:07.573736 | orchestrator | 2025-06-02 13:22:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:22:10.622277 | orchestrator | 2025-06-02 13:22:10 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:22:10.622388 | orchestrator | 2025-06-02 13:22:10 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state STARTED 2025-06-02 13:22:10.622403 | orchestrator | 2025-06-02 13:22:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:22:13.672573 | orchestrator | 2025-06-02 
13:22:13 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:22:13.681169 | orchestrator | 2025-06-02 13:22:13 | INFO  | Task 2de266a9-4b2b-4fce-a121-204cde71d6e4 is in state SUCCESS
2025-06-02 13:22:13.682112 | orchestrator |
2025-06-02 13:22:13.684220 | orchestrator |
2025-06-02 13:22:13.684282 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:22:13.684296 | orchestrator |
2025-06-02 13:22:13.684308 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:22:13.684319 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.414) 0:00:00.414 ***********
2025-06-02 13:22:13.684330 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:22:13.684725 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:22:13.684738 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:22:13.684749 | orchestrator |
2025-06-02 13:22:13.684760 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:22:13.684771 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.514) 0:00:00.929 ***********
2025-06-02 13:22:13.684782 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-02 13:22:13.684793 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-02 13:22:13.684804 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-02 13:22:13.684815 | orchestrator |
2025-06-02 13:22:13.684827 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-02 13:22:13.684839 | orchestrator |
2025-06-02 13:22:13.684869 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 13:22:13.684881 | orchestrator | Monday 02 June 2025 13:16:12 +0000 (0:00:00.737) 0:00:01.667 ***********
2025-06-02 13:22:13.684894 | orchestrator |
included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.684907 | orchestrator |
2025-06-02 13:22:13.684919 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-02 13:22:13.684932 | orchestrator | Monday 02 June 2025 13:16:13 +0000 (0:00:00.883) 0:00:02.551 ***********
2025-06-02 13:22:13.684944 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:22:13.684982 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:22:13.684995 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:22:13.685019 | orchestrator |
2025-06-02 13:22:13.685032 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 13:22:13.685044 | orchestrator | Monday 02 June 2025 13:16:13 +0000 (0:00:00.744) 0:00:03.296 ***********
2025-06-02 13:22:13.685056 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.685068 | orchestrator |
2025-06-02 13:22:13.685333 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-02 13:22:13.685348 | orchestrator | Monday 02 June 2025 13:16:15 +0000 (0:00:01.290) 0:00:04.586 ***********
2025-06-02 13:22:13.685362 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:22:13.685373 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:22:13.685383 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:22:13.685394 | orchestrator |
2025-06-02 13:22:13.685405 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-02 13:22:13.685415 | orchestrator | Monday 02 June 2025 13:16:15 +0000 (0:00:00.855) 0:00:05.442 ***********
2025-06-02 13:22:13.685426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685462 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685473 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685484 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685505 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 13:22:13.685517 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 13:22:13.685527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 13:22:13.685538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 13:22:13.685548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-02 13:22:13.685559 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-02 13:22:13.685570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-02 13:22:13.685580 | orchestrator |
2025-06-02 13:22:13.685591 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-02 13:22:13.685601 | orchestrator | Monday 02 June 2025 13:16:19 +0000 (0:00:03.535) 0:00:08.978 ***********
2025-06-02 13:22:13.685612 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 13:22:13.685623 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 13:22:13.685634 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 13:22:13.685645 | orchestrator |
2025-06-02 13:22:13.685656 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-02 13:22:13.685667 | orchestrator | Monday 02 June 2025 13:16:20 +0000 (0:00:00.915) 0:00:09.893 ***********
2025-06-02 13:22:13.685678 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-02 13:22:13.685689 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-02 13:22:13.685699 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-02 13:22:13.685746 | orchestrator |
2025-06-02 13:22:13.686202 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-02 13:22:13.686220 | orchestrator | Monday 02 June 2025 13:16:21 +0000 (0:00:01.533) 0:00:11.427 ***********
2025-06-02 13:22:13.686231 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-02 13:22:13.686242 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.686271 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-02 13:22:13.686283 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.686294 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-02 13:22:13.686304 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.686315 | orchestrator |
2025-06-02 13:22:13.686326 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-02 13:22:13.686337 | orchestrator | Monday 02 June 2025 13:16:23 +0000 (0:00:01.196) 0:00:12.623 ***********
2025-06-02 13:22:13.686359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.686472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.686738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.686768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.686779 | orchestrator |
2025-06-02 13:22:13.686791 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-02 13:22:13.686802 | orchestrator | Monday 02 June 2025 13:16:24 +0000 (0:00:01.689) 0:00:14.313 ***********
2025-06-02 13:22:13.686876 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.686891 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.686901 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.686912 | orchestrator |
2025-06-02 13:22:13.686924 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-02 13:22:13.686935 | orchestrator | Monday 02 June 2025 13:16:26 +0000 (0:00:01.921) 0:00:16.234 ***********
2025-06-02 13:22:13.686946 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-02 13:22:13.686982 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-02 13:22:13.686994 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-02 13:22:13.687005 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-02 13:22:13.687026 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-02 13:22:13.687037 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-02 13:22:13.687048 | orchestrator |
2025-06-02 13:22:13.687058 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-02 13:22:13.687069 | orchestrator | Monday 02 June 2025 13:16:29 +0000 (0:00:02.529) 0:00:18.764 ***********
2025-06-02 13:22:13.687079 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.687090 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.687101 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.687275 | orchestrator |
2025-06-02 13:22:13.687435 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-02 13:22:13.687447 | orchestrator | Monday 02 June 2025 13:16:30 +0000 (0:00:01.552) 0:00:20.317 ***********
2025-06-02 13:22:13.687458 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:22:13.687469 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:22:13.687480 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:22:13.687491 | orchestrator |
2025-06-02 13:22:13.687502 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-02 13:22:13.687513 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:01.432) 0:00:21.749 ***********
2025-06-02 13:22:13.687524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.687547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.687575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.687620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.687654 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.687667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.687679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 
13:22:13.687690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.687708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.687728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.687745 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.687757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.687768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.687779 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.687790 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.687801 | orchestrator | 2025-06-02 13:22:13.687812 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-02 13:22:13.687823 | orchestrator | Monday 02 June 2025 13:16:32 +0000 (0:00:00.505) 0:00:22.255 *********** 2025-06-02 13:22:13.687834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.687859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.688529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.688540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.688586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.688602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.688623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19', '__omit_place_holder__6095bd4e6a5d68e54ff8fdd519db2993d8406c19'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 13:22:13.688634 | orchestrator | 2025-06-02 13:22:13.688644 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-02 13:22:13.688654 | orchestrator | Monday 02 June 2025 13:16:36 +0000 (0:00:04.160) 0:00:26.416 *********** 2025-06-02 13:22:13.688664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.688750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.688760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2025-06-02 13:22:13.688776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.688786 | orchestrator | 2025-06-02 13:22:13.688796 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-02 13:22:13.688806 | orchestrator | Monday 02 June 2025 13:16:40 +0000 (0:00:03.409) 0:00:29.825 *********** 2025-06-02 13:22:13.688815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 13:22:13.688830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 13:22:13.688840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-02 13:22:13.688850 | orchestrator | 2025-06-02 13:22:13.688860 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-02 13:22:13.688869 | orchestrator | Monday 02 June 2025 13:16:42 +0000 (0:00:01.744) 0:00:31.569 *********** 2025-06-02 13:22:13.688879 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 13:22:13.688889 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 13:22:13.688898 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-02 13:22:13.688907 | orchestrator | 2025-06-02 13:22:13.688929 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-02 13:22:13.688939 | orchestrator | Monday 02 June 2025 13:16:46 +0000 (0:00:04.017) 0:00:35.587 *********** 2025-06-02 13:22:13.688948 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.688984 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.689124 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.689136 | orchestrator | 2025-06-02 13:22:13.689245 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-02 13:22:13.689255 | orchestrator | Monday 02 June 2025 13:16:47 +0000 (0:00:01.284) 0:00:36.872 *********** 2025-06-02 13:22:13.689265 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 13:22:13.689276 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 13:22:13.689286 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-02 13:22:13.689296 | orchestrator | 2025-06-02 13:22:13.689305 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-02 13:22:13.689314 | orchestrator | Monday 02 June 2025 13:16:49 +0000 (0:00:02.173) 0:00:39.045 *********** 2025-06-02 13:22:13.689324 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 13:22:13.689334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 13:22:13.689343 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-02 13:22:13.689361 | orchestrator | 2025-06-02 13:22:13.689370 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-02 13:22:13.689380 | orchestrator | Monday 02 June 2025 13:16:51 +0000 (0:00:01.750) 0:00:40.796 *********** 2025-06-02 13:22:13.689389 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-02 13:22:13.689399 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-02 13:22:13.689409 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-02 13:22:13.689418 | orchestrator | 2025-06-02 13:22:13.689428 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-02 13:22:13.689437 | orchestrator | Monday 02 June 2025 13:16:52 +0000 (0:00:01.254) 0:00:42.051 *********** 2025-06-02 13:22:13.689474 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-02 13:22:13.689484 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-02 13:22:13.689494 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-02 13:22:13.689503 | orchestrator | 2025-06-02 13:22:13.689513 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 13:22:13.689522 | orchestrator | Monday 02 June 2025 13:16:54 +0000 (0:00:01.605) 0:00:43.656 *********** 2025-06-02 13:22:13.689531 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.689541 | orchestrator | 2025-06-02 13:22:13.689550 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-02 13:22:13.689593 | orchestrator | Monday 02 June 2025 13:16:55 +0000 (0:00:00.808) 0:00:44.465 *********** 2025-06-02 13:22:13.689618 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.689928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.690012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2025-06-02 13:22:13.690071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.690091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.690102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.690112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.690122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.690142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.690152 | orchestrator | 2025-06-02 13:22:13.690162 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-02 13:22:13.690182 | orchestrator | Monday 02 June 2025 13:16:58 +0000 
(0:00:03.111) 0:00:47.576 *********** 2025-06-02 13:22:13.690198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690235 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.690290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690370 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.690409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690429 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.690439 | orchestrator | 2025-06-02 13:22:13.690449 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-02 13:22:13.690666 | orchestrator | Monday 02 June 2025 13:16:58 +0000 (0:00:00.580) 0:00:48.157 *********** 2025-06-02 13:22:13.690685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690725 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.690791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690837 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.690847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 13:22:13.690867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 13:22:13.690877 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.690887 | orchestrator | 2025-06-02 13:22:13.690896 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 13:22:13.690906 | orchestrator | Monday 02 June 2025 13:17:00 +0000 (0:00:01.700) 0:00:49.857 *********** 2025-06-02 13:22:13.690924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 13:22:13.690945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-06-02 13:22:13.690982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.690999 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.691017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691351 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.691375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691426 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.691435 | orchestrator |
2025-06-02 13:22:13.691445 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-02 13:22:13.691455 | orchestrator | Monday 02 June 2025 13:17:01 +0000 (0:00:00.754) 0:00:50.612 ***********
2025-06-02 13:22:13.691495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691543 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.691553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691606 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.691616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691646 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.691655 | orchestrator |
2025-06-02 13:22:13.691665 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-06-02 13:22:13.691675 | orchestrator | Monday 02 June 2025 13:17:01 +0000 (0:00:00.569) 0:00:51.181 ***********
2025-06-02 13:22:13.691684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691734 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.691744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.691754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.691925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.691942 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.692081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692132 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.692143 | orchestrator |
2025-06-02 13:22:13.692154 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-06-02 13:22:13.692165 | orchestrator | Monday 02 June 2025 13:17:03 +0000 (0:00:01.378) 0:00:52.560 ***********
2025-06-02 13:22:13.692181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692215 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.692226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692274 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.692290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692323 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.692334 | orchestrator |
2025-06-02 13:22:13.692345 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-06-02 13:22:13.692356 | orchestrator | Monday 02 June 2025 13:17:04 +0000 (0:00:01.570) 0:00:54.131 ***********
2025-06-02 13:22:13.692403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692469 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.692485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692700 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.692717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692752 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.692761 | orchestrator |
2025-06-02 13:22:13.692770 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-06-02 13:22:13.692784 | orchestrator | Monday 02 June 2025 13:17:05 +0000 (0:00:00.922) 0:00:55.053 ***********
2025-06-02 13:22:13.692793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692825 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.692834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692866 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.692880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.692896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 13:22:13.692931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 13:22:13.692953 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.692983 | orchestrator |
2025-06-02 13:22:13.692991 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-06-02 13:22:13.692999 | orchestrator | Monday 02 June 2025 13:17:06 +0000 (0:00:01.321) 0:00:56.375 ***********
2025-06-02 13:22:13.693013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-02 13:22:13.693021 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-02 13:22:13.693029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-02 13:22:13.693037 | orchestrator |
2025-06-02 13:22:13.693045 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-06-02 13:22:13.693053 | orchestrator | Monday 02 June 2025 13:17:08 +0000 (0:00:01.287) 0:00:57.662 ***********
2025-06-02 13:22:13.693061 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-02 13:22:13.693069 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-02 13:22:13.693077 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-02 13:22:13.693084 | orchestrator |
2025-06-02 13:22:13.693100 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-06-02 13:22:13.693108 | orchestrator | Monday 02 June 2025 13:17:09 +0000 (0:00:01.354) 0:00:59.017 ***********
2025-06-02 13:22:13.693116 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 13:22:13.693124 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 13:22:13.693132 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-02 13:22:13.693139 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-02 13:22:13.693147 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.693155 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-02 13:22:13.693163 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.693170 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-02 13:22:13.693528 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.693548 | orchestrator |
2025-06-02 13:22:13.693557 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-06-02 13:22:13.693565 | orchestrator | Monday 02 June 2025 13:17:10 +0000 (0:00:00.886) 0:00:59.904 ***********
2025-06-02 13:22:13.693581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 13:22:13.693596 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.693606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 13:22:13.693622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2025-06-02 13:22:13.693632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.693641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 13:22:13.693650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.693691 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.693723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 13:22:13.693738 | orchestrator | 2025-06-02 13:22:13.693747 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 13:22:13.693756 | orchestrator | Monday 02 June 2025 13:17:12 +0000 (0:00:02.460) 0:01:02.364 *********** 2025-06-02 13:22:13.693765 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.693773 | orchestrator | 2025-06-02 13:22:13.693782 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 13:22:13.693791 | orchestrator | Monday 02 June 2025 13:17:13 +0000 (0:00:00.691) 0:01:03.056 *********** 2025-06-02 13:22:13.693800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 13:22:13.693810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.693820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.693834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 13:22:13.693843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.693871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.693881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.693890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 13:22:13.693899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.693908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.693923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694249 | orchestrator | 2025-06-02 13:22:13.694258 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 13:22:13.694267 | orchestrator | Monday 02 June 2025 13:17:17 +0000 (0:00:03.939) 0:01:06.995 *********** 2025-06-02 13:22:13.694276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 13:22:13.694285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.694295 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694313 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.694330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 13:22:13.694347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.694356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 13:22:13.694384 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.694393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.694412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.694450 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.694459 | orchestrator | 2025-06-02 13:22:13.694468 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 13:22:13.694477 | orchestrator | Monday 02 June 2025 13:17:18 +0000 (0:00:00.667) 0:01:07.663 *********** 2025-06-02 13:22:13.694486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 13:22:13.694496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 13:22:13.694505 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.694513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  
2025-06-02 13:22:13.694521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 13:22:13.694529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.694536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 13:22:13.694544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 13:22:13.694552 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.694560 | orchestrator | 2025-06-02 13:22:13.694568 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 13:22:13.694576 | orchestrator | Monday 02 June 2025 13:17:19 +0000 (0:00:01.016) 0:01:08.679 *********** 2025-06-02 13:22:13.694583 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.694591 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.694599 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.694606 | orchestrator | 2025-06-02 13:22:13.694614 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 13:22:13.694621 | orchestrator | Monday 02 June 2025 13:17:20 +0000 (0:00:01.154) 0:01:09.834 *********** 2025-06-02 13:22:13.694629 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.694637 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.694661 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.694670 | orchestrator | 2025-06-02 13:22:13.694678 | orchestrator | TASK [include_role : barbican] 
************************************************* 2025-06-02 13:22:13.694687 | orchestrator | Monday 02 June 2025 13:17:22 +0000 (0:00:01.791) 0:01:11.625 *********** 2025-06-02 13:22:13.694696 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.694704 | orchestrator | 2025-06-02 13:22:13.694713 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 13:22:13.694810 | orchestrator | Monday 02 June 2025 13:17:22 +0000 (0:00:00.610) 0:01:12.235 *********** 2025-06-02 13:22:13.695021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.695042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.695069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.695089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695127 | orchestrator | 2025-06-02 13:22:13.695135 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-02 13:22:13.695143 | orchestrator | Monday 02 June 2025 13:17:26 +0000 (0:00:03.733) 0:01:15.969 *********** 2025-06-02 13:22:13.695151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.695164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695187 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.695208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695216 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695229 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.695249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.695266 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695273 | orchestrator | 2025-06-02 13:22:13.695281 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-02 13:22:13.695293 | orchestrator | Monday 02 June 2025 13:17:27 +0000 (0:00:00.571) 0:01:16.541 *********** 2025-06-02 13:22:13.695301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695346 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695353 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 13:22:13.695366 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695373 | orchestrator | 2025-06-02 13:22:13.695379 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-02 13:22:13.695386 | orchestrator | Monday 02 June 2025 13:17:27 +0000 (0:00:00.908) 0:01:17.449 *********** 2025-06-02 13:22:13.695392 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.695399 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.695406 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.695412 | orchestrator | 2025-06-02 13:22:13.695419 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-02 13:22:13.695425 | orchestrator | Monday 02 June 2025 13:17:30 +0000 (0:00:02.216) 0:01:19.666 *********** 2025-06-02 13:22:13.695432 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.695438 | 
orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.695445 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.695459 | orchestrator | 2025-06-02 13:22:13.695466 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-02 13:22:13.695472 | orchestrator | Monday 02 June 2025 13:17:31 +0000 (0:00:01.770) 0:01:21.437 *********** 2025-06-02 13:22:13.695479 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695485 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695492 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695498 | orchestrator | 2025-06-02 13:22:13.695505 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-02 13:22:13.695511 | orchestrator | Monday 02 June 2025 13:17:32 +0000 (0:00:00.268) 0:01:21.705 *********** 2025-06-02 13:22:13.695518 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.695561 | orchestrator | 2025-06-02 13:22:13.695570 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-02 13:22:13.695576 | orchestrator | Monday 02 June 2025 13:17:32 +0000 (0:00:00.579) 0:01:22.285 *********** 2025-06-02 13:22:13.695626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 13:22:13.695640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 13:22:13.695653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 13:22:13.695660 | orchestrator | 2025-06-02 13:22:13.695667 | orchestrator | TASK 
[haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-02 13:22:13.695674 | orchestrator | Monday 02 June 2025 13:17:35 +0000 (0:00:02.419) 0:01:24.705 *********** 2025-06-02 13:22:13.695681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 13:22:13.695687 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 
2 fall 5']}}}})  2025-06-02 13:22:13.695709 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 13:22:13.695728 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695735 | orchestrator | 2025-06-02 13:22:13.695746 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-02 13:22:13.695753 | orchestrator | Monday 02 June 2025 13:17:36 +0000 (0:00:01.360) 0:01:26.065 *********** 2025-06-02 13:22:13.695765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695782 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695872 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 13:22:13.695915 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695922 | orchestrator | 2025-06-02 13:22:13.695929 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-02 13:22:13.695936 | orchestrator | Monday 02 June 2025 13:17:38 +0000 (0:00:01.604) 0:01:27.670 *********** 2025-06-02 13:22:13.695942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.695949 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.695971 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.695978 | orchestrator | 2025-06-02 13:22:13.695984 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-02 13:22:13.695991 | orchestrator | Monday 02 June 2025 13:17:38 +0000 (0:00:00.662) 0:01:28.332 *********** 2025-06-02 13:22:13.695997 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.696025 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.696043 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.696050 | orchestrator | 2025-06-02 13:22:13.696057 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-02 13:22:13.696120 | orchestrator | Monday 02 June 2025 13:17:39 +0000 (0:00:00.905) 0:01:29.238 *********** 2025-06-02 13:22:13.696129 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.696183 | orchestrator | 2025-06-02 13:22:13.696190 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-02 13:22:13.696197 | orchestrator | Monday 02 June 2025 13:17:40 +0000 (0:00:00.757) 0:01:29.996 *********** 2025-06-02 13:22:13.696209 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.696218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.696263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.696271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696325 | orchestrator | 2025-06-02 13:22:13.696332 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 13:22:13.696338 | orchestrator | Monday 02 June 
2025 13:17:43 +0000 (0:00:02.921) 0:01:32.917 *********** 2025-06-02 13:22:13.696345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.696352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696382 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.696392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.696400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696425 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.696436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.696446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})  2025-06-02 13:22:13.696467 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.696474 | orchestrator | 2025-06-02 13:22:13.696481 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-02 13:22:13.696487 | orchestrator | Monday 02 June 2025 13:17:44 +0000 (0:00:00.932) 0:01:33.850 *********** 2025-06-02 13:22:13.696499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696513 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.696520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696533 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.696544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}})  2025-06-02 13:22:13.696558 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.696565 | orchestrator | 2025-06-02 13:22:13.696571 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-02 13:22:13.696578 | orchestrator | Monday 02 June 2025 13:17:45 +0000 (0:00:00.933) 0:01:34.784 *********** 2025-06-02 13:22:13.696584 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.696591 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.696597 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.696604 | orchestrator | 2025-06-02 13:22:13.696610 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 13:22:13.696617 | orchestrator | Monday 02 June 2025 13:17:46 +0000 (0:00:01.351) 0:01:36.136 *********** 2025-06-02 13:22:13.696623 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.696630 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.696640 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.696646 | orchestrator | 2025-06-02 13:22:13.696653 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 13:22:13.696660 | orchestrator | Monday 02 June 2025 13:17:48 +0000 (0:00:01.934) 0:01:38.070 *********** 2025-06-02 13:22:13.696666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.696673 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.696679 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.696686 | orchestrator | 2025-06-02 13:22:13.696692 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 13:22:13.696699 | orchestrator | Monday 02 June 2025 13:17:49 +0000 (0:00:00.454) 0:01:38.525 *********** 2025-06-02 13:22:13.696705 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.696712 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 13:22:13.696719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.696725 | orchestrator | 2025-06-02 13:22:13.696732 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-02 13:22:13.696738 | orchestrator | Monday 02 June 2025 13:17:49 +0000 (0:00:00.299) 0:01:38.824 *********** 2025-06-02 13:22:13.696745 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.696751 | orchestrator | 2025-06-02 13:22:13.696758 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 13:22:13.696765 | orchestrator | Monday 02 June 2025 13:17:50 +0000 (0:00:00.763) 0:01:39.588 *********** 2025-06-02 13:22:13.696776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:22:13.696784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.696791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:22:13.696848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.696860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
13:22:13.696871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:22:13.696915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.696926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.696983 | orchestrator | 2025-06-02 13:22:13.696990 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 13:22:13.696997 | orchestrator | Monday 02 June 2025 13:17:55 +0000 (0:00:05.686) 0:01:45.275 *********** 2025-06-02 13:22:13.697007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:22:13.697018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.697025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
13:22:13.697037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.697076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:22:13.697087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.697098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697137 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.697147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:22:13.697158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:22:13.697165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.697212 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.697219 | orchestrator | 2025-06-02 13:22:13.697225 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-02 13:22:13.697232 | orchestrator | Monday 02 June 2025 13:17:56 +0000 (0:00:01.074) 0:01:46.349 *********** 2025-06-02 13:22:13.697239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697253 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.697259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697273 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.697279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 13:22:13.697293 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.697300 | orchestrator | 2025-06-02 13:22:13.697306 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-02 13:22:13.697313 | orchestrator | Monday 02 June 2025 13:17:57 +0000 (0:00:00.948) 0:01:47.297 *********** 2025-06-02 13:22:13.697319 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.697326 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.697332 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.697339 | orchestrator | 2025-06-02 13:22:13.697345 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-02 13:22:13.697352 | orchestrator | Monday 02 June 2025 13:17:59 +0000 (0:00:01.699) 0:01:48.996 *********** 2025-06-02 13:22:13.697359 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.697365 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.697372 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.697378 | orchestrator | 2025-06-02 13:22:13.697385 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-02 13:22:13.697391 | orchestrator | Monday 02 June 2025 13:18:01 +0000 (0:00:01.980) 0:01:50.977 *********** 2025-06-02 13:22:13.697398 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.697404 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.697411 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.697417 | orchestrator | 2025-06-02 13:22:13.697424 | orchestrator | TASK 
[include_role : glance] *************************************************** 2025-06-02 13:22:13.697430 | orchestrator | Monday 02 June 2025 13:18:01 +0000 (0:00:00.345) 0:01:51.323 *********** 2025-06-02 13:22:13.697437 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.697451 | orchestrator | 2025-06-02 13:22:13.697458 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-02 13:22:13.697464 | orchestrator | Monday 02 June 2025 13:18:02 +0000 (0:00:00.820) 0:01:52.143 *********** 2025-06-02 13:22:13.697478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:22:13.697488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.697899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:22:13.697929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  
2025-06-02 13:22:13.697945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:22:13.698343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.698360 | orchestrator | 2025-06-02 13:22:13.698368 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-02 13:22:13.698376 | orchestrator | Monday 02 June 2025 13:18:07 +0000 (0:00:05.051) 0:01:57.195 
*********** 2025-06-02 13:22:13.698393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:22:13.698415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.698423 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.698431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:22:13.698488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.698499 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.698507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:22:13.698531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': 
'30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.698540 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.698548 | orchestrator | 2025-06-02 13:22:13.698555 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-02 13:22:13.698562 | orchestrator | Monday 02 June 2025 13:18:10 +0000 (0:00:02.703) 0:01:59.898 *********** 2025-06-02 13:22:13.698570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698585 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.698592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698683 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
13:22:13.698693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-02 13:22:13.698725 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.698732 | orchestrator | 2025-06-02 13:22:13.698739 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-02 13:22:13.698746 | orchestrator | Monday 02 June 2025 13:18:13 +0000 (0:00:03.067) 0:02:02.966 *********** 2025-06-02 13:22:13.698753 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.698761 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.698768 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.698774 | orchestrator | 2025-06-02 13:22:13.698786 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-02 13:22:13.698793 | orchestrator | Monday 02 June 2025 13:18:15 +0000 (0:00:01.513) 0:02:04.479 *********** 2025-06-02 13:22:13.698801 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 13:22:13.698808 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.698814 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.698821 | orchestrator | 2025-06-02 13:22:13.698828 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-02 13:22:13.698836 | orchestrator | Monday 02 June 2025 13:18:16 +0000 (0:00:01.884) 0:02:06.364 *********** 2025-06-02 13:22:13.698843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.698849 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.698855 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.698861 | orchestrator | 2025-06-02 13:22:13.698867 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-02 13:22:13.698873 | orchestrator | Monday 02 June 2025 13:18:17 +0000 (0:00:00.248) 0:02:06.613 *********** 2025-06-02 13:22:13.698879 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.698885 | orchestrator | 2025-06-02 13:22:13.698891 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-02 13:22:13.698897 | orchestrator | Monday 02 June 2025 13:18:17 +0000 (0:00:00.667) 0:02:07.281 *********** 2025-06-02 13:22:13.698904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 13:22:13.698916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 13:22:13.698923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 13:22:13.698929 | orchestrator | 2025-06-02 13:22:13.698935 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-02 13:22:13.698941 | orchestrator | Monday 02 June 2025 13:18:20 +0000 (0:00:03.125) 0:02:10.407 *********** 2025-06-02 13:22:13.698952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 13:22:13.698973 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.698983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 13:22:13.698990 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.698996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-02 13:22:13.699007 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.699013 | orchestrator | 2025-06-02 13:22:13.699019 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-02 13:22:13.699025 | orchestrator | Monday 02 June 2025 13:18:21 +0000 (0:00:00.372) 0:02:10.780 *********** 2025-06-02 13:22:13.699031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699058 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.699064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.699070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000'}})  2025-06-02 13:22:13.699082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.699088 | orchestrator | 2025-06-02 13:22:13.699094 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-02 13:22:13.699100 | orchestrator | Monday 02 June 2025 13:18:21 +0000 (0:00:00.668) 0:02:11.449 *********** 2025-06-02 13:22:13.699106 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.699113 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.699119 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.699125 | orchestrator | 2025-06-02 13:22:13.699131 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-02 13:22:13.699137 | orchestrator | Monday 02 June 2025 13:18:23 +0000 (0:00:01.349) 0:02:12.798 *********** 2025-06-02 13:22:13.699143 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.699149 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.699155 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.699161 | orchestrator | 2025-06-02 13:22:13.699171 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-02 13:22:13.699178 | orchestrator | Monday 02 June 2025 13:18:24 +0000 (0:00:01.606) 0:02:14.405 *********** 2025-06-02 13:22:13.699184 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.699190 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.699196 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.699202 | orchestrator | 2025-06-02 13:22:13.699208 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-02 13:22:13.699214 | orchestrator | Monday 02 June 2025 13:18:25 +0000 (0:00:00.259) 0:02:14.665 *********** 2025-06-02 13:22:13.699220 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 
2025-06-02 13:22:13.699230 | orchestrator | 2025-06-02 13:22:13.699237 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-02 13:22:13.699243 | orchestrator | Monday 02 June 2025 13:18:26 +0000 (0:00:00.842) 0:02:15.508 *********** 2025-06-02 13:22:13.699254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:22:13.699269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:22:13.699283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:22:13.699290 | orchestrator | 2025-06-02 13:22:13.699297 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-02 13:22:13.699306 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:04.582) 0:02:20.090 *********** 2025-06-02 13:22:13.699331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:22:13.699350 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.699362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:22:13.699369 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.699384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 13:22:13.699396 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.699402 | orchestrator |
2025-06-02 13:22:13.699409 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-06-02 13:22:13.699415 | orchestrator | Monday 02 June 2025 13:18:31 +0000 (0:00:00.895) 0:02:20.985 ***********
2025-06-02 13:22:13.699421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 13:22:13.699457 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.699463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 13:22:13.699548 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.699556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 13:22:13.699575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 13:22:13.699581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 13:22:13.699587 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.699593 | orchestrator |
2025-06-02 13:22:13.699599 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-06-02 13:22:13.699606 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:01.135) 0:02:22.120 ***********
2025-06-02 13:22:13.699612 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.699618 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.699624 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.699630 | orchestrator |
2025-06-02 13:22:13.699636 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-06-02 13:22:13.699642 | orchestrator | Monday 02 June 2025 13:18:34 +0000 (0:00:01.607) 0:02:23.728 ***********
2025-06-02 13:22:13.699648 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.699654 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.699660 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.699666 | orchestrator |
2025-06-02 13:22:13.699672 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-06-02 13:22:13.699679 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.327) 0:02:25.690 ***********
2025-06-02 13:22:13.699689 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.699695 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.699701 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.699707 | orchestrator |
2025-06-02 13:22:13.699713 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-06-02 13:22:13.699719 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.293) 0:02:26.017 ***********
2025-06-02 13:22:13.699725 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.699731 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.699737 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.699743 | orchestrator |
2025-06-02 13:22:13.699749 | orchestrator | TASK [include_role : keystone] *************************************************
2025-06-02 13:22:13.699755 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.293) 0:02:26.311 ***********
2025-06-02 13:22:13.699761 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.699767 | orchestrator |
2025-06-02 13:22:13.699774 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-06-02 13:22:13.699780 | orchestrator | Monday 02 June 2025 13:18:38 +0000 (0:00:01.160) 0:02:27.471 ***********
2025-06-02 13:22:13.699795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.699810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.699817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.699839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.699849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.699862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.699873 | orchestrator |
2025-06-02 13:22:13.699879 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-06-02 13:22:13.699885 | orchestrator | Monday 02 June 2025 13:18:42 +0000 (0:00:04.431) 0:02:31.903 ***********
2025-06-02 13:22:13.699892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.699912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.699918 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.699925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.699942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.699948 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.699979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-02 13:22:13.699995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-02 13:22:13.700008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-02 13:22:13.700014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.700020 | orchestrator |
2025-06-02 13:22:13.700026 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-06-02 13:22:13.700033 | orchestrator | Monday 02 June 2025 13:18:42 +0000 (0:00:00.517) 0:02:32.420 ***********
2025-06-02 13:22:13.700039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700057 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.700063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700076 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.700082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-02 13:22:13.700095 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.700101 | orchestrator |
2025-06-02 13:22:13.700108 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-02 13:22:13.700114 | orchestrator | Monday 02 June 2025 13:18:43 +0000 (0:00:00.933) 0:02:33.354 ***********
2025-06-02 13:22:13.700120 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.700126 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.700132 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.700138 | orchestrator |
2025-06-02 13:22:13.700144 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-02 13:22:13.700150 | orchestrator | Monday 02 June 2025 13:18:45 +0000 (0:00:01.239) 0:02:34.593 ***********
2025-06-02 13:22:13.700156 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.700162 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.700168 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.700174 | orchestrator |
2025-06-02 13:22:13.700180 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-02 13:22:13.700186 | orchestrator | Monday 02 June 2025 13:18:47 +0000 (0:00:01.905) 0:02:36.499 ***********
2025-06-02 13:22:13.700440 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.700452 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.700459 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.700465 | orchestrator |
2025-06-02 13:22:13.700471 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-02 13:22:13.700477 | orchestrator | Monday 02 June 2025 13:18:47 +0000 (0:00:00.303) 0:02:36.803 ***********
2025-06-02 13:22:13.700483 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.700490 | orchestrator |
2025-06-02 13:22:13.700496 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-02 13:22:13.700502 | orchestrator | Monday 02 June 2025 13:18:48 +0000 (0:00:01.171) 0:02:37.974 ***********
2025-06-02 13:22:13.700516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.700538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.700596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.700622 | orchestrator |
2025-06-02 13:22:13.700628 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-02 13:22:13.700634 | orchestrator | Monday 02 June 2025 13:18:51 +0000 (0:00:03.145) 0:02:41.119 ***********
2025-06-02 13:22:13.700641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.700654 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.700700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.700725 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.700731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 13:22:13.700738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.700744 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.700750 | orchestrator | 2025-06-02 13:22:13.700765 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-02 13:22:13.700772 | orchestrator | Monday 02 June 2025 13:18:52 +0000 (0:00:00.637) 0:02:41.756 *********** 2025-06-02 13:22:13.700778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.700785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.700791 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.700797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.700804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.700810 | orchestrator | 
skipping: [testbed-node-1] 2025-06-02 13:22:13.701117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.701128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-02 13:22:13.701276 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.701288 | orchestrator | 2025-06-02 13:22:13.701295 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-02 13:22:13.701301 | orchestrator | Monday 02 June 2025 13:18:53 +0000 (0:00:01.339) 0:02:43.095 *********** 2025-06-02 13:22:13.701307 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.701313 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.701319 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.701326 | orchestrator | 2025-06-02 13:22:13.701332 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-02 13:22:13.701338 | orchestrator | Monday 02 June 2025 13:18:54 +0000 (0:00:01.274) 0:02:44.370 *********** 2025-06-02 13:22:13.701344 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.701350 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.701356 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.701362 | orchestrator | 2025-06-02 13:22:13.701368 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-02 13:22:13.701374 | orchestrator | Monday 02 June 2025 13:18:56 +0000 (0:00:01.949) 0:02:46.320 *********** 2025-06-02 13:22:13.701386 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.701393 | orchestrator | 
2025-06-02 13:22:13.701399 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-02 13:22:13.701405 | orchestrator | Monday 02 June 2025 13:18:57 +0000 (0:00:01.004) 0:02:47.324 *********** 2025-06-02 13:22:13.701412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 13:22:13.701419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 13:22:13.701504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-02 13:22:13.701535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701602 | orchestrator | 2025-06-02 13:22:13.701608 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-02 13:22:13.701615 | orchestrator | Monday 02 June 2025 13:19:01 +0000 (0:00:03.581) 0:02:50.906 *********** 2025-06-02 13:22:13.701621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 13:22:13.701628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701651 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.701699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 13:22:13.701712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701731 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.701737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-02 13:22:13.701788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.701854 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.701861 | orchestrator | 2025-06-02 13:22:13.701867 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-02 13:22:13.701874 | orchestrator | Monday 02 June 2025 13:19:02 +0000 (0:00:00.710) 0:02:51.616 *********** 2025-06-02 13:22:13.701880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701893 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.701899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701921 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.701928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}})  2025-06-02 13:22:13.701940 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.701946 | orchestrator | 2025-06-02 13:22:13.702089 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-02 13:22:13.702104 | orchestrator | Monday 02 June 2025 13:19:03 +0000 (0:00:00.855) 0:02:52.472 *********** 2025-06-02 13:22:13.702111 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.702117 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.702123 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.702130 | orchestrator | 2025-06-02 13:22:13.702136 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-02 13:22:13.702142 | orchestrator | Monday 02 June 2025 13:19:04 +0000 (0:00:01.687) 0:02:54.160 *********** 2025-06-02 13:22:13.702148 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.702154 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.702160 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.702166 | orchestrator | 2025-06-02 13:22:13.702173 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-02 13:22:13.702179 | orchestrator | Monday 02 June 2025 13:19:06 +0000 (0:00:02.153) 0:02:56.314 *********** 2025-06-02 13:22:13.702185 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.702191 | orchestrator | 2025-06-02 13:22:13.702197 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-02 13:22:13.702203 | orchestrator | Monday 02 June 2025 13:19:08 +0000 (0:00:01.165) 0:02:57.479 *********** 2025-06-02 13:22:13.702210 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:22:13.702216 | orchestrator | 2025-06-02 13:22:13.702222 | orchestrator | TASK [haproxy-config : Copying over 
mariadb haproxy config] ********************
2025-06-02 13:22:13.702228 | orchestrator | Monday 02 June 2025 13:19:11 +0000 (0:00:03.056)       0:03:00.535 ***********
2025-06-02 13:22:13.702303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702329 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.702375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702395 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.702402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702420 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.702426 | orchestrator |
2025-06-02 13:22:13.702431 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-02 13:22:13.702436 | orchestrator | Monday 02 June 2025 13:19:13 +0000 (0:00:02.457)       0:03:02.993 ***********
2025-06-02 13:22:13.702482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702500 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.702506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702554 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.702563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:22:13.702573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 13:22:13.702579 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.702584 | orchestrator |
2025-06-02 13:22:13.702590 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-02 13:22:13.702624 | orchestrator | Monday 02 June 2025 13:19:15 +0000 (0:00:01.823)       0:03:04.817 ***********
2025-06-02 13:22:13.702631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702699 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.702706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702733 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.702739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 13:22:13.702745 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.702751 | orchestrator |
2025-06-02 13:22:13.702757 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-02 13:22:13.702763 | orchestrator | Monday 02 June 2025 13:19:17 +0000 (0:00:01.996)       0:03:06.814 ***********
2025-06-02 13:22:13.702769 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.702775 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.702781 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.702786 | orchestrator |
2025-06-02 13:22:13.702791 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-02 13:22:13.702797 | orchestrator | Monday 02 June 2025 13:19:19 +0000 (0:00:01.752)       0:03:08.566 ***********
2025-06-02 13:22:13.702802 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.702807 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.702812 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.702818 | orchestrator |
2025-06-02 13:22:13.702845 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-02 13:22:13.702852 | orchestrator | Monday 02 June 2025 13:19:20 +0000 (0:00:00.295)       0:03:09.976 ***********
2025-06-02 13:22:13.702867 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.702873 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.702878 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.702883 | orchestrator |
2025-06-02 13:22:13.702889 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-02 13:22:13.702894 | orchestrator | Monday 02 June 2025 13:19:20 +0000 (0:00:00.983)       0:03:10.272 ***********
2025-06-02 13:22:13.702900 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.702905 | orchestrator |
2025-06-02 13:22:13.702910 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-02 13:22:13.702916 | orchestrator | Monday 02 June 2025 13:19:21 +0000 (0:00:00.983)       0:03:11.255 ***********
2025-06-02 13:22:13.702976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703043 | orchestrator |
2025-06-02 13:22:13.703048 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-02 13:22:13.703054 | orchestrator | Monday 02 June 2025 13:19:23 +0000 (0:00:01.452)       0:03:12.708 ***********
2025-06-02 13:22:13.703060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703065 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.703071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703077 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.703130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 13:22:13.703139 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.703144 | orchestrator |
2025-06-02 13:22:13.703149 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-02 13:22:13.703155 | orchestrator | Monday 02 June 2025 13:19:23 +0000 (0:00:00.420)       0:03:13.128 ***********
2025-06-02 13:22:13.703167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 13:22:13.703173 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.703178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 13:22:13.703184 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.703189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 13:22:13.703195 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.703200 | orchestrator |
2025-06-02 13:22:13.703206 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-02 13:22:13.703211 | orchestrator | Monday 02 June 2025 13:19:24 +0000 (0:00:00.569)       0:03:13.698 ***********
2025-06-02 13:22:13.703216 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.703221 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.703227 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.703232 | orchestrator |
2025-06-02 13:22:13.703237 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-02 13:22:13.703243 | orchestrator | Monday 02 June 2025 13:19:25 +0000 (0:00:00.777)       0:03:14.475 ***********
2025-06-02 13:22:13.703248 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.703253 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.703258 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.703264 | orchestrator |
2025-06-02 13:22:13.703269 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-02 13:22:13.703274 | orchestrator | Monday 02 June 2025 13:19:26 +0000 (0:00:01.194)       0:03:15.669 ***********
2025-06-02 13:22:13.703279 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.703285 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.703290 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.703295 | orchestrator |
2025-06-02 13:22:13.703301 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-02 13:22:13.703306 | orchestrator | Monday 02 June 2025 13:19:26 +0000 (0:00:00.304)       0:03:15.974 ***********
2025-06-02 13:22:13.703311 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.703316 | orchestrator |
2025-06-02 13:22:13.703322 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-02 13:22:13.703331 | orchestrator | Monday 02 June 2025 13:19:27 +0000 (0:00:01.355)       0:03:17.329 ***********
2025-06-02 13:22:13.703337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:22:13.703380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:22:13.703451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 13:22:13.703499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 13:22:13.703530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.703540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-02 13:22:13.703583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-02 13:22:13.703613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.703636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.703641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.703692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.703710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.703715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.703725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.703826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.703832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 
13:22:13.703838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.703850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.703903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.703920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.703931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:22:13.703937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-06-02 13:22:13.704067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 13:22:13.704082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2025-06-02 13:22:13.704094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704175 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.704269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704286 | orchestrator | 2025-06-02 13:22:13.704292 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-02 13:22:13.704297 | orchestrator | Monday 02 June 2025 13:19:32 +0000 (0:00:04.221) 0:03:21.551 *********** 2025-06-02 13:22:13.704303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:22:13.704309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  
2025-06-02 13:22:13.704363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 13:22:13.704379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:22:13.704443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 13:22:13.704538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:22:13.704591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.704624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 
13:22:13.704705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704711 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.704751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 13:22:13.704769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-06-02 13:22:13.704854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 13:22:13.704866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:22:13.704877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2025-06-02 13:22:13.704899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.704912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 13:22:13.704918 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.704923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2025-06-02 13:22:13.704929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.704935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-02 13:22:13.704941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:22:13.704984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.704992 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.704997 | orchestrator |
2025-06-02 13:22:13.705003 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-02 13:22:13.705008 | orchestrator | Monday 02 June 2025 13:19:33 +0000 (0:00:01.420) 0:03:22.972 ***********
2025-06-02 13:22:13.705017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705029 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.705034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705045 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.705050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-02 13:22:13.705061 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.705066 | orchestrator |
2025-06-02 13:22:13.705072 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-02 13:22:13.705077 | orchestrator | Monday 02 June 2025 13:19:35 +0000 (0:00:02.037) 0:03:25.009 ***********
2025-06-02 13:22:13.705082 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.705087 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.705093 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.705098 | orchestrator |
2025-06-02 13:22:13.705103 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-02 13:22:13.705109 | orchestrator | Monday 02 June 2025 13:19:36 +0000 (0:00:01.287) 0:03:26.297 ***********
2025-06-02 13:22:13.705114 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.705119 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.705124 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.705130 | orchestrator |
2025-06-02 13:22:13.705166 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-02 13:22:13.705172 | orchestrator | Monday 02 June 2025 13:19:38 +0000 (0:00:02.077) 0:03:28.375 ***********
2025-06-02 13:22:13.705178 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.705188 | orchestrator |
2025-06-02 13:22:13.705210 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-02 13:22:13.705216 | orchestrator | Monday 02 June 2025 13:19:40 +0000 (0:00:01.169) 0:03:29.544 ***********
2025-06-02 13:22:13.705222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705260 | orchestrator |
2025-06-02 13:22:13.705265 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-06-02 13:22:13.705271 | orchestrator | Monday 02 June 2025 13:19:43 +0000 (0:00:03.218) 0:03:32.763 ***********
2025-06-02 13:22:13.705277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705287 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.705292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705298 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.705318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705382 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.705400 | orchestrator |
2025-06-02 13:22:13.705406 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-02 13:22:13.705414 | orchestrator | Monday 02 June 2025 13:19:43 +0000 (0:00:00.507) 0:03:33.271 ***********
2025-06-02 13:22:13.705420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705432 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.705437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705448 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.705453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705469 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.705474 | orchestrator |
2025-06-02 13:22:13.705481 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-02 13:22:13.705487 | orchestrator | Monday 02 June 2025 13:19:44 +0000 (0:00:00.759) 0:03:34.030 ***********
2025-06-02 13:22:13.705493 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.705499 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.705505 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.705512 | orchestrator |
2025-06-02 13:22:13.705518 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-02 13:22:13.705524 | orchestrator | Monday 02 June 2025 13:19:46 +0000 (0:00:01.550) 0:03:35.581 ***********
2025-06-02 13:22:13.705530 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.705536 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.705542 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.705549 | orchestrator |
2025-06-02 13:22:13.705555 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-02 13:22:13.705561 | orchestrator | Monday 02 June 2025 13:19:48 +0000 (0:00:01.983) 0:03:37.565 ***********
2025-06-02 13:22:13.705568 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.705574 | orchestrator |
2025-06-02 13:22:13.705580 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-02 13:22:13.705586 | orchestrator | Monday 02 June 2025 13:19:49 +0000 (0:00:01.216) 0:03:38.781 ***********
2025-06-02 13:22:13.705616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705710 | orchestrator |
2025-06-02 13:22:13.705716 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-06-02 13:22:13.705722 | orchestrator | Monday 02 June 2025 13:19:53 +0000 (0:00:04.075) 0:03:42.857 ***********
2025-06-02 13:22:13.705729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705768 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.705778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705797 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.705819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.705830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:22:13.705846 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.705852 | orchestrator |
2025-06-02 13:22:13.705857 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-06-02 13:22:13.705863 | orchestrator | Monday 02 June 2025 13:19:54 +0000 (0:00:00.752) 0:03:43.610 ***********
2025-06-02 13:22:13.705868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705891 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.705897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.705946 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.705951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.706007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-02 13:22:13.706013 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.706061 | orchestrator |
2025-06-02 13:22:13.706067 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-06-02 13:22:13.706073 | orchestrator | Monday 02 June 2025 13:19:54 +0000 (0:00:00.736) 0:03:44.346 ***********
2025-06-02 13:22:13.706078 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.706084 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.706089 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.706095 | orchestrator |
2025-06-02 13:22:13.706100 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-06-02 13:22:13.706105 | orchestrator | Monday 02 June 2025 13:19:56 +0000 (0:00:01.436) 0:03:45.783 ***********
2025-06-02 13:22:13.706111 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.706116 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.706121 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.706127 | orchestrator |
2025-06-02 13:22:13.706132 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-06-02 13:22:13.706138 | orchestrator | Monday 02 June 2025 13:19:58 +0000 (0:00:01.999) 0:03:47.782 ***********
2025-06-02 13:22:13.706143 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.706148 | orchestrator |
2025-06-02 13:22:13.706154 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-06-02 13:22:13.706159 | orchestrator | Monday 02 June 2025 13:20:00 +0000 (0:00:01.754) 0:03:49.537 ***********
2025-06-02 13:22:13.706165 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-06-02 13:22:13.706170 | orchestrator |
2025-06-02 13:22:13.706176 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-06-02 13:22:13.706181 | orchestrator | Monday 02 June 2025 13:20:01 +0000 (0:00:01.101) 0:03:50.638 ***********
2025-06-02 13:22:13.706187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 13:22:13.706193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 13:22:13.706199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']},
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-02 13:22:13.706205 | orchestrator | 2025-06-02 13:22:13.706214 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-02 13:22:13.706220 | orchestrator | Monday 02 June 2025 13:20:05 +0000 (0:00:04.085) 0:03:54.723 *********** 2025-06-02 13:22:13.706246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706253 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706265 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706281 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706287 | orchestrator | 2025-06-02 13:22:13.706292 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-02 13:22:13.706298 | orchestrator | Monday 02 June 2025 13:20:07 +0000 (0:00:01.797) 0:03:56.520 *********** 2025-06-02 13:22:13.706303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706316 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706332 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706338 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-02 13:22:13.706355 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706361 | orchestrator | 2025-06-02 13:22:13.706366 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 13:22:13.706372 | orchestrator | Monday 02 June 2025 13:20:09 +0000 (0:00:02.276) 0:03:58.797 *********** 2025-06-02 13:22:13.706377 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.706382 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.706388 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.706393 | orchestrator | 2025-06-02 13:22:13.706398 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 13:22:13.706404 | orchestrator | Monday 02 June 2025 13:20:11 +0000 (0:00:02.201) 0:04:00.999 *********** 2025-06-02 13:22:13.706409 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.706414 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.706420 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.706425 | orchestrator | 2025-06-02 13:22:13.706430 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-02 13:22:13.706436 | orchestrator | Monday 02 June 2025 13:20:14 +0000 (0:00:02.862) 0:04:03.861 *********** 2025-06-02 13:22:13.706441 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-02 13:22:13.706447 | orchestrator | 2025-06-02 13:22:13.706452 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-02 13:22:13.706473 | orchestrator | Monday 02 June 2025 13:20:15 +0000 (0:00:00.815) 0:04:04.677 *********** 2025-06-02 13:22:13.706479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706500 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706511 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706516 | orchestrator | 2025-06-02 13:22:13.706521 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-02 13:22:13.706526 | orchestrator | Monday 02 June 2025 13:20:16 +0000 (0:00:01.354) 0:04:06.031 *********** 2025-06-02 13:22:13.706531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706540 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706550 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-02 13:22:13.706559 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706564 | orchestrator | 2025-06-02 13:22:13.706569 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-02 13:22:13.706574 | orchestrator | Monday 02 June 2025 13:20:18 +0000 (0:00:01.590) 0:04:07.622 *********** 2025-06-02 13:22:13.706579 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706583 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706588 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706593 | orchestrator | 2025-06-02 13:22:13.706598 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 13:22:13.706615 | orchestrator | Monday 02 June 2025 13:20:19 +0000 (0:00:01.354) 0:04:08.976 *********** 2025-06-02 13:22:13.706621 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.706626 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.706631 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.706635 | orchestrator | 2025-06-02 13:22:13.706640 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-02 13:22:13.706645 | orchestrator | Monday 02 June 2025 13:20:21 +0000 (0:00:02.253) 0:04:11.230 *********** 2025-06-02 13:22:13.706650 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.706654 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.706659 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.706664 | 
orchestrator | 2025-06-02 13:22:13.706669 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-02 13:22:13.706674 | orchestrator | Monday 02 June 2025 13:20:24 +0000 (0:00:03.032) 0:04:14.263 *********** 2025-06-02 13:22:13.706678 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-02 13:22:13.706683 | orchestrator | 2025-06-02 13:22:13.706688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-02 13:22:13.706695 | orchestrator | Monday 02 June 2025 13:20:25 +0000 (0:00:01.146) 0:04:15.409 *********** 2025-06-02 13:22:13.706700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706719 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 13:22:13.706724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706729 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706733 | orchestrator | 2025-06-02 13:22:13.706738 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-02 13:22:13.706743 | orchestrator | Monday 02 June 2025 13:20:26 +0000 (0:00:01.035) 0:04:16.445 *********** 2025-06-02 13:22:13.706748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706753 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706763 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-02 13:22:13.706787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706792 | orchestrator | 2025-06-02 13:22:13.706797 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-02 13:22:13.706802 | orchestrator | Monday 02 June 2025 13:20:28 +0000 (0:00:01.224) 0:04:17.669 *********** 2025-06-02 13:22:13.706807 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.706811 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.706816 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.706821 | orchestrator | 2025-06-02 13:22:13.706829 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-02 13:22:13.706837 | orchestrator | Monday 02 June 2025 13:20:29 +0000 (0:00:01.737) 0:04:19.407 *********** 2025-06-02 13:22:13.706842 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.706846 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.706851 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.706856 | orchestrator | 2025-06-02 13:22:13.706861 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2025-06-02 13:22:13.706866 | orchestrator | Monday 02 June 2025 13:20:32 +0000 (0:00:02.381) 0:04:21.788 *********** 2025-06-02 13:22:13.706870 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.706875 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.706880 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.706885 | orchestrator | 2025-06-02 13:22:13.706890 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-02 13:22:13.706894 | orchestrator | Monday 02 June 2025 13:20:35 +0000 (0:00:03.040) 0:04:24.829 *********** 2025-06-02 13:22:13.706899 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.706904 | orchestrator | 2025-06-02 13:22:13.706909 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-02 13:22:13.706913 | orchestrator | Monday 02 June 2025 13:20:36 +0000 (0:00:01.300) 0:04:26.130 *********** 2025-06-02 13:22:13.706918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.706924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.706929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.706947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.706973 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.706979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.706984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.706989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.706994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.707026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:22:13.707031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.707036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.707051 | orchestrator | 2025-06-02 13:22:13.707056 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-02 13:22:13.707061 | orchestrator | Monday 02 June 2025 13:20:40 +0000 (0:00:03.836) 0:04:29.966 *********** 2025-06-02 13:22:13.707079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.707091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.707097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.707112 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.707139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.707149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.707164 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:22:13.707174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:22:13.707196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:22:13.707210 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:22:13.707215 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707220 | orchestrator | 2025-06-02 13:22:13.707225 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-02 13:22:13.707229 | orchestrator | Monday 02 June 2025 13:20:41 +0000 (0:00:00.750) 0:04:30.717 *********** 2025-06-02 13:22:13.707234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707244 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707259 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 13:22:13.707277 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707282 | orchestrator | 2025-06-02 13:22:13.707287 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-02 13:22:13.707291 | orchestrator | Monday 02 June 2025 13:20:42 +0000 (0:00:00.903) 0:04:31.620 *********** 2025-06-02 13:22:13.707296 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.707301 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.707306 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.707310 | orchestrator | 2025-06-02 13:22:13.707315 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-02 13:22:13.707320 | orchestrator | Monday 02 June 2025 13:20:44 +0000 (0:00:01.882) 0:04:33.503 *********** 2025-06-02 13:22:13.707325 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.707330 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.707335 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.707339 | orchestrator | 2025-06-02 13:22:13.707344 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-02 13:22:13.707349 | orchestrator | Monday 02 June 2025 13:20:46 +0000 (0:00:02.083) 0:04:35.587 *********** 2025-06-02 13:22:13.707354 | 
orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.707358 | orchestrator | 2025-06-02 13:22:13.707363 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-02 13:22:13.707368 | orchestrator | Monday 02 June 2025 13:20:47 +0000 (0:00:01.334) 0:04:36.922 *********** 2025-06-02 13:22:13.707386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:22:13.707395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:22:13.707400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:22:13.707409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:22:13.707428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:22:13.707437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:22:13.707443 | orchestrator | 2025-06-02 13:22:13.707448 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-02 13:22:13.707452 | orchestrator | Monday 02 June 2025 13:20:53 +0000 (0:00:05.838) 0:04:42.761 *********** 2025-06-02 13:22:13.707457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:22:13.707466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:22:13.707471 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:22:13.707499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:22:13.707504 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:22:13.707518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:22:13.707523 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707528 | orchestrator | 2025-06-02 13:22:13.707532 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 13:22:13.707537 | orchestrator | Monday 02 June 2025 13:20:53 +0000 (0:00:00.673) 0:04:43.434 *********** 2025-06-02 13:22:13.707542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 13:22:13.707560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707571 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 13:22:13.707581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707593 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 13:22:13.707603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 13:22:13.707617 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707622 | orchestrator | 2025-06-02 13:22:13.707627 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL 
users config] ********* 2025-06-02 13:22:13.707632 | orchestrator | Monday 02 June 2025 13:20:54 +0000 (0:00:00.883) 0:04:44.318 *********** 2025-06-02 13:22:13.707636 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707641 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707646 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707651 | orchestrator | 2025-06-02 13:22:13.707655 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-02 13:22:13.707660 | orchestrator | Monday 02 June 2025 13:20:56 +0000 (0:00:01.205) 0:04:45.523 *********** 2025-06-02 13:22:13.707665 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.707670 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.707674 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.707679 | orchestrator | 2025-06-02 13:22:13.707684 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-02 13:22:13.707689 | orchestrator | Monday 02 June 2025 13:20:57 +0000 (0:00:01.044) 0:04:46.568 *********** 2025-06-02 13:22:13.707693 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:22:13.707698 | orchestrator | 2025-06-02 13:22:13.707703 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-02 13:22:13.707707 | orchestrator | Monday 02 June 2025 13:20:58 +0000 (0:00:01.832) 0:04:48.401 *********** 2025-06-02 13:22:13.707712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:22:13.707718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:22:13.707736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:22:13.707765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:22:13.707770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:22:13.707795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 
13:22:13.707807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:22:13.707812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:22:13.707843 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 13:22:13.707864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707875 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:22:13.707889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 13:22:13.707900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:22:13.707916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 13:22:13.707929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.707945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.707950 | orchestrator | 2025-06-02 13:22:13.707977 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single 
external frontend] *** 2025-06-02 13:22:13.707985 | orchestrator | Monday 02 June 2025 13:21:03 +0000 (0:00:04.238) 0:04:52.640 *********** 2025-06-02 13:22:13.707993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 13:22:13.708001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:22:13.708009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 13:22:13.708020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:22:13.708038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 
13:22:13.708052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.708067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:22:13.708075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 13:22:13.708089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 13:22:13.708095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 13:22:13.708101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 13:22:13.708106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:22:13.708130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:22:13.708136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 13:22:13.708141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 13:22:13.708146 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708151 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 13:22:13.708160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 13:22:13.708169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:22:13.708177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:22:13.708184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 13:22:13.708190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 13:22:13.708195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-02 13:22:13.708200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:22:13.708210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 13:22:13.708218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 13:22:13.708223 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708227 | orchestrator |
2025-06-02 13:22:13.708232 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-06-02 13:22:13.708237 | orchestrator | Monday 02 June 2025 13:21:04 +0000 (0:00:01.374) 0:04:54.014 ***********
2025-06-02 13:22:13.708242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708266 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-06-02 13:22:13.708291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-06-02 13:22:13.708314 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708318 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708323 | orchestrator |
2025-06-02 13:22:13.708328 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-06-02 13:22:13.708333 | orchestrator | Monday 02 June 2025 13:21:05 +0000 (0:00:01.004) 0:04:55.019 ***********
2025-06-02 13:22:13.708338 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708343 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708347 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708352 | orchestrator |
2025-06-02 13:22:13.708357 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-06-02 13:22:13.708362 | orchestrator | Monday 02 June 2025 13:21:05 +0000 (0:00:00.428) 0:04:55.447 ***********
2025-06-02 13:22:13.708369 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708374 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708379 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708383 | orchestrator |
2025-06-02 13:22:13.708388 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-06-02 13:22:13.708393 | orchestrator | Monday 02 June 2025 13:21:07 +0000 (0:00:01.446) 0:04:56.894 ***********
2025-06-02 13:22:13.708397 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.708402 | orchestrator |
2025-06-02 13:22:13.708407 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-06-02 13:22:13.708412 | orchestrator | Monday 02 June 2025 13:21:09 +0000 (0:00:01.799) 0:04:58.694 ***********
2025-06-02 13:22:13.708417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708437 | orchestrator |
2025-06-02 13:22:13.708442 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-06-02 13:22:13.708446 | orchestrator | Monday 02 June 2025 13:21:11 +0000 (0:00:02.467) 0:05:01.162 ***********
2025-06-02 13:22:13.708469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708482 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708487 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-02 13:22:13.708501 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708506 | orchestrator |
2025-06-02 13:22:13.708511 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-06-02 13:22:13.708516 | orchestrator | Monday 02 June 2025 13:21:12 +0000 (0:00:00.406) 0:05:01.569 ***********
2025-06-02 13:22:13.708520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 13:22:13.708525 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 13:22:13.708535 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-06-02 13:22:13.708544 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708549 | orchestrator |
2025-06-02 13:22:13.708554 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-06-02 13:22:13.708559 | orchestrator | Monday 02 June 2025 13:21:13 +0000 (0:00:01.042) 0:05:02.611 ***********
2025-06-02 13:22:13.708563 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708568 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708573 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708578 | orchestrator |
2025-06-02 13:22:13.708582 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-06-02 13:22:13.708590 | orchestrator | Monday 02 June 2025 13:21:13 +0000 (0:00:00.434) 0:05:03.046 ***********
2025-06-02 13:22:13.708595 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708600 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708605 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708609 | orchestrator |
2025-06-02 13:22:13.708614 | orchestrator | TASK [include_role : skyline] **************************************************
2025-06-02 13:22:13.708621 | orchestrator | Monday 02 June 2025 13:21:15 +0000 (0:00:01.478) 0:05:04.525 ***********
2025-06-02 13:22:13.708626 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:22:13.708631 | orchestrator |
2025-06-02 13:22:13.708636 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-06-02 13:22:13.708640 | orchestrator | Monday 02 June 2025 13:21:16 +0000 (0:00:01.910) 0:05:06.435 ***********
2025-06-02 13:22:13.708648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708692 | orchestrator |
2025-06-02 13:22:13.708697 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-06-02 13:22:13.708702 | orchestrator | Monday 02 June 2025 13:21:23 +0000 (0:00:06.066) 0:05:12.502 ***********
2025-06-02 13:22:13.708707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708717 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708741 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-06-02 13:22:13.708756 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708761 | orchestrator |
2025-06-02 13:22:13.708766 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-06-02 13:22:13.708771 | orchestrator | Monday 02 June 2025 13:21:23 +0000 (0:00:00.619) 0:05:13.121 ***********
2025-06-02 13:22:13.708776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708802 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708829 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-02 13:22:13.708854 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708858 | orchestrator |
2025-06-02 13:22:13.708863 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-06-02 13:22:13.708868 | orchestrator | Monday 02 June 2025 13:21:25 +0000 (0:00:01.463) 0:05:14.584 ***********
2025-06-02 13:22:13.708873 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.708877 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.708882 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.708887 | orchestrator |
2025-06-02 13:22:13.708892 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-06-02 13:22:13.708897 | orchestrator | Monday 02 June 2025 13:21:26 +0000 (0:00:01.285) 0:05:15.870 ***********
2025-06-02 13:22:13.708901 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:22:13.708906 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:22:13.708911 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:22:13.708916 | orchestrator |
2025-06-02 13:22:13.708920 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-02 13:22:13.708925 | orchestrator | Monday 02 June 2025 13:21:28 +0000 (0:00:02.080) 0:05:17.951 ***********
2025-06-02 13:22:13.708930 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708935 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708945 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708950 | orchestrator |
2025-06-02 13:22:13.708970 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-02 13:22:13.708976 | orchestrator | Monday 02 June 2025 13:21:28 +0000 (0:00:00.305) 0:05:18.256 ***********
2025-06-02 13:22:13.708981 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.708985 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:22:13.708990 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:22:13.708995 | orchestrator |
2025-06-02 13:22:13.709000 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-02 13:22:13.709004 | orchestrator | Monday 02 June 2025 13:21:29 +0000 (0:00:00.737) 0:05:18.994 ***********
2025-06-02 13:22:13.709009 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:22:13.709014 | orchestrator | skipping:
[testbed-node-1] 2025-06-02 13:22:13.709018 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709023 | orchestrator | 2025-06-02 13:22:13.709028 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-02 13:22:13.709033 | orchestrator | Monday 02 June 2025 13:21:29 +0000 (0:00:00.424) 0:05:19.419 *********** 2025-06-02 13:22:13.709040 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709045 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709050 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709055 | orchestrator | 2025-06-02 13:22:13.709059 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-02 13:22:13.709064 | orchestrator | Monday 02 June 2025 13:21:30 +0000 (0:00:00.325) 0:05:19.744 *********** 2025-06-02 13:22:13.709069 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709074 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709078 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709083 | orchestrator | 2025-06-02 13:22:13.709088 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-02 13:22:13.709093 | orchestrator | Monday 02 June 2025 13:21:30 +0000 (0:00:00.356) 0:05:20.101 *********** 2025-06-02 13:22:13.709097 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709102 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709107 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709111 | orchestrator | 2025-06-02 13:22:13.709116 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-02 13:22:13.709123 | orchestrator | Monday 02 June 2025 13:21:31 +0000 (0:00:00.864) 0:05:20.965 *********** 2025-06-02 13:22:13.709128 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709133 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 13:22:13.709138 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709143 | orchestrator | 2025-06-02 13:22:13.709147 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-02 13:22:13.709152 | orchestrator | Monday 02 June 2025 13:21:32 +0000 (0:00:00.675) 0:05:21.640 *********** 2025-06-02 13:22:13.709157 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709161 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709166 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709171 | orchestrator | 2025-06-02 13:22:13.709176 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-02 13:22:13.709181 | orchestrator | Monday 02 June 2025 13:21:32 +0000 (0:00:00.335) 0:05:21.975 *********** 2025-06-02 13:22:13.709185 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709190 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709195 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709199 | orchestrator | 2025-06-02 13:22:13.709204 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-02 13:22:13.709209 | orchestrator | Monday 02 June 2025 13:21:33 +0000 (0:00:01.224) 0:05:23.200 *********** 2025-06-02 13:22:13.709214 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709218 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709223 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709228 | orchestrator | 2025-06-02 13:22:13.709238 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-02 13:22:13.709243 | orchestrator | Monday 02 June 2025 13:21:34 +0000 (0:00:00.852) 0:05:24.053 *********** 2025-06-02 13:22:13.709247 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709252 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709257 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 13:22:13.709262 | orchestrator | 2025-06-02 13:22:13.709266 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-02 13:22:13.709271 | orchestrator | Monday 02 June 2025 13:21:35 +0000 (0:00:00.876) 0:05:24.929 *********** 2025-06-02 13:22:13.709276 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.709281 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.709285 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.709290 | orchestrator | 2025-06-02 13:22:13.709295 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-02 13:22:13.709300 | orchestrator | Monday 02 June 2025 13:21:40 +0000 (0:00:05.131) 0:05:30.061 *********** 2025-06-02 13:22:13.709305 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709310 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709314 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709319 | orchestrator | 2025-06-02 13:22:13.709324 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-02 13:22:13.709329 | orchestrator | Monday 02 June 2025 13:21:44 +0000 (0:00:03.544) 0:05:33.606 *********** 2025-06-02 13:22:13.709333 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.709338 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.709343 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.709348 | orchestrator | 2025-06-02 13:22:13.709352 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-02 13:22:13.709357 | orchestrator | Monday 02 June 2025 13:21:57 +0000 (0:00:13.308) 0:05:46.914 *********** 2025-06-02 13:22:13.709362 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709367 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709372 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709376 | 
orchestrator | 2025-06-02 13:22:13.709381 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-02 13:22:13.709386 | orchestrator | Monday 02 June 2025 13:21:58 +0000 (0:00:00.774) 0:05:47.689 *********** 2025-06-02 13:22:13.709391 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:22:13.709395 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:22:13.709400 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:22:13.709405 | orchestrator | 2025-06-02 13:22:13.709410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-02 13:22:13.709414 | orchestrator | Monday 02 June 2025 13:22:07 +0000 (0:00:09.227) 0:05:56.916 *********** 2025-06-02 13:22:13.709419 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709424 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709429 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709433 | orchestrator | 2025-06-02 13:22:13.709438 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-02 13:22:13.709443 | orchestrator | Monday 02 June 2025 13:22:08 +0000 (0:00:00.854) 0:05:57.770 *********** 2025-06-02 13:22:13.709448 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709452 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709457 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709462 | orchestrator | 2025-06-02 13:22:13.709466 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-02 13:22:13.709471 | orchestrator | Monday 02 June 2025 13:22:08 +0000 (0:00:00.446) 0:05:58.217 *********** 2025-06-02 13:22:13.709476 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709481 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709488 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709493 | 
orchestrator | 2025-06-02 13:22:13.709498 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-02 13:22:13.709506 | orchestrator | Monday 02 June 2025 13:22:09 +0000 (0:00:00.371) 0:05:58.589 *********** 2025-06-02 13:22:13.709511 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709516 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709520 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709525 | orchestrator | 2025-06-02 13:22:13.709530 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-02 13:22:13.709535 | orchestrator | Monday 02 June 2025 13:22:09 +0000 (0:00:00.376) 0:05:58.966 *********** 2025-06-02 13:22:13.709539 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709544 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709549 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709553 | orchestrator | 2025-06-02 13:22:13.709558 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-02 13:22:13.709563 | orchestrator | Monday 02 June 2025 13:22:10 +0000 (0:00:00.801) 0:05:59.768 *********** 2025-06-02 13:22:13.709568 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:22:13.709584 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:22:13.709590 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:22:13.709595 | orchestrator | 2025-06-02 13:22:13.709599 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-02 13:22:13.709604 | orchestrator | Monday 02 June 2025 13:22:10 +0000 (0:00:00.362) 0:06:00.130 *********** 2025-06-02 13:22:13.709609 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:22:13.709614 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:22:13.709619 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:22:13.709623 | orchestrator | 
2025-06-02 13:22:13.709628 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-02 13:22:13.709633 | orchestrator | Monday 02 June 2025 13:22:11 +0000 (0:00:00.972) 0:06:01.103 ***********
2025-06-02 13:22:13.709638 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:22:13.709642 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:22:13.709647 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:22:13.709652 | orchestrator |
2025-06-02 13:22:13.709657 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:22:13.709662 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 13:22:13.709667 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 13:22:13.709672 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-02 13:22:13.709677 | orchestrator |
2025-06-02 13:22:13.709681 | orchestrator |
2025-06-02 13:22:13.709686 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:22:13.709691 | orchestrator | Monday 02 June 2025 13:22:12 +0000 (0:00:00.876) 0:06:01.979 ***********
2025-06-02 13:22:13.709696 | orchestrator | ===============================================================================
2025-06-02 13:22:13.709700 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.31s
2025-06-02 13:22:13.709705 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.23s
2025-06-02 13:22:13.709710 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.07s
2025-06-02 13:22:13.709715 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.84s
2025-06-02 13:22:13.709719 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.69s
2025-06-02 13:22:13.709724 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.13s
2025-06-02 13:22:13.709729 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.05s
2025-06-02 13:22:13.709734 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.58s
2025-06-02 13:22:13.709738 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.43s
2025-06-02 13:22:13.709746 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.24s
2025-06-02 13:22:13.709751 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.22s
2025-06-02 13:22:13.709756 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.16s
2025-06-02 13:22:13.709761 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s
2025-06-02 13:22:13.709765 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.08s
2025-06-02 13:22:13.709770 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.02s
2025-06-02 13:22:13.709775 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.94s
2025-06-02 13:22:13.709780 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.84s
2025-06-02 13:22:13.709784 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.73s
2025-06-02 13:22:13.709789 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.58s
2025-06-02 13:22:13.709794 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.54s
2025-06-02 13:22:13.709799 | orchestrator |
2025-06-02 13:22:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:22:16.733937 | orchestrator | 2025-06-02 13:22:16 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED
2025-06-02 13:22:16.734131 | orchestrator | 2025-06-02 13:22:16 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED
2025-06-02 13:22:16.734147 | orchestrator | 2025-06-02 13:22:16 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:22:16.734159 | orchestrator | 2025-06-02 13:22:16 | INFO  | Wait 1 second(s) until the next check
[... identical three-task status checks repeated at 13:22:19 ...]
2025-06-02 13:22:22.878068 | orchestrator | 2025-06-02 13:22:22 | INFO  | Task ec511412-0ebf-443e-b9f0-bee219a5ade7 is in state STARTED
2025-06-02 13:22:22.878167 | orchestrator | 2025-06-02 13:22:22 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED
2025-06-02 13:22:22.878179 | orchestrator | 2025-06-02 13:22:22 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED
2025-06-02 13:22:22.878189 | orchestrator | 2025-06-02 13:22:22 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:22:22.878198 | orchestrator | 2025-06-02 13:22:22 | INFO  | Wait 1 second(s) until the next check
[... identical four-task status checks repeated at 13:22:25, 13:22:28, 13:22:31 and 13:22:35 ...]
2025-06-02 13:22:38.068261 | orchestrator | 2025-06-02 13:22:38 | INFO  | Task ec511412-0ebf-443e-b9f0-bee219a5ade7 is in state SUCCESS
2025-06-02 13:22:38.068535 | orchestrator | 2025-06-02 13:22:38 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED
2025-06-02 13:22:38.069542 | orchestrator | 2025-06-02 13:22:38 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED
2025-06-02 13:22:38.071878 | orchestrator | 2025-06-02 13:22:38 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED
2025-06-02 13:22:38.071905 | orchestrator | 2025-06-02 13:22:38 | INFO  | Wait 1 second(s) until the next check
[... tasks c3626778-cdc3-4ed9-85cf-e1ae5389bfa6, 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa and 3977533b-9e98-4608-8f01-a4aa4a5a8802 remained in state STARTED; identical checks repeated every ~3 s from 13:22:41 through 13:23:51 ...]
2025-06-02 13:23:54.327751 | orchestrator | 2025-06-02 13:23:54 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED
2025-06-02 13:23:54.331224 | orchestrator | 2025-06-02 13:23:54 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED
2025-06-02 13:23:54.333402 | orchestrator | 2025-06-02 13:23:54 |
INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:23:54.333426 | orchestrator | 2025-06-02 13:23:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:23:57.379931 | orchestrator | 2025-06-02 13:23:57 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:23:57.380940 | orchestrator | 2025-06-02 13:23:57 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:23:57.382295 | orchestrator | 2025-06-02 13:23:57 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:23:57.382333 | orchestrator | 2025-06-02 13:23:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:00.425976 | orchestrator | 2025-06-02 13:24:00 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:00.427813 | orchestrator | 2025-06-02 13:24:00 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:00.429700 | orchestrator | 2025-06-02 13:24:00 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:00.429748 | orchestrator | 2025-06-02 13:24:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:03.477576 | orchestrator | 2025-06-02 13:24:03 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:03.478981 | orchestrator | 2025-06-02 13:24:03 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:03.480797 | orchestrator | 2025-06-02 13:24:03 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:03.480824 | orchestrator | 2025-06-02 13:24:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:06.525832 | orchestrator | 2025-06-02 13:24:06 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:06.527766 | orchestrator | 2025-06-02 13:24:06 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in 
state STARTED 2025-06-02 13:24:06.529669 | orchestrator | 2025-06-02 13:24:06 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:06.529715 | orchestrator | 2025-06-02 13:24:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:09.570300 | orchestrator | 2025-06-02 13:24:09 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:09.572394 | orchestrator | 2025-06-02 13:24:09 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:09.574839 | orchestrator | 2025-06-02 13:24:09 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:09.575179 | orchestrator | 2025-06-02 13:24:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:12.624836 | orchestrator | 2025-06-02 13:24:12 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:12.626492 | orchestrator | 2025-06-02 13:24:12 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:12.628545 | orchestrator | 2025-06-02 13:24:12 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:12.628573 | orchestrator | 2025-06-02 13:24:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:15.694749 | orchestrator | 2025-06-02 13:24:15 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:15.698279 | orchestrator | 2025-06-02 13:24:15 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:15.699496 | orchestrator | 2025-06-02 13:24:15 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state STARTED 2025-06-02 13:24:15.699637 | orchestrator | 2025-06-02 13:24:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:18.756162 | orchestrator | 2025-06-02 13:24:18 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:18.760220 | orchestrator 
| 2025-06-02 13:24:18 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:18.764572 | orchestrator | 2025-06-02 13:24:18 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:24:18.774641 | orchestrator | 2025-06-02 13:24:18.774693 | orchestrator | None 2025-06-02 13:24:18.774707 | orchestrator | 2025-06-02 13:24:18.774719 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-02 13:24:18.774730 | orchestrator | 2025-06-02 13:24:18.774741 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 13:24:18.774752 | orchestrator | Monday 02 June 2025 13:13:47 +0000 (0:00:00.663) 0:00:00.663 *********** 2025-06-02 13:24:18.774764 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.774775 | orchestrator | 2025-06-02 13:24:18.774786 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 13:24:18.774797 | orchestrator | Monday 02 June 2025 13:13:48 +0000 (0:00:01.018) 0:00:01.681 *********** 2025-06-02 13:24:18.774843 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.774857 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.774868 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.774879 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.774890 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.774900 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.774911 | orchestrator | 2025-06-02 13:24:18.774922 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 13:24:18.774933 | orchestrator | Monday 02 June 2025 13:13:50 +0000 (0:00:01.452) 0:00:03.134 *********** 2025-06-02 13:24:18.774944 | orchestrator | ok: [testbed-node-0] 
2025-06-02 13:24:18.774955 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.774966 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.774976 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.774987 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.774998 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775009 | orchestrator | 2025-06-02 13:24:18.775020 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 13:24:18.775031 | orchestrator | Monday 02 June 2025 13:13:51 +0000 (0:00:00.877) 0:00:04.012 *********** 2025-06-02 13:24:18.775042 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.775194 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.775209 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.775220 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.775231 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.775244 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775258 | orchestrator | 2025-06-02 13:24:18.775271 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 13:24:18.775283 | orchestrator | Monday 02 June 2025 13:13:52 +0000 (0:00:00.845) 0:00:04.857 *********** 2025-06-02 13:24:18.775296 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.775309 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.775322 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.775334 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.775346 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.775381 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775394 | orchestrator | 2025-06-02 13:24:18.775406 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-02 13:24:18.775419 | orchestrator | Monday 02 June 2025 13:13:52 +0000 (0:00:00.657) 0:00:05.515 *********** 2025-06-02 13:24:18.775432 | 
orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.775444 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.775455 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.775468 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.775480 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.775503 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775516 | orchestrator | 2025-06-02 13:24:18.775529 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 13:24:18.775541 | orchestrator | Monday 02 June 2025 13:13:53 +0000 (0:00:00.570) 0:00:06.085 *********** 2025-06-02 13:24:18.775553 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.775565 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.775577 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.775589 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.775601 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.775614 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775624 | orchestrator | 2025-06-02 13:24:18.775635 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 13:24:18.775646 | orchestrator | Monday 02 June 2025 13:13:54 +0000 (0:00:00.760) 0:00:06.845 *********** 2025-06-02 13:24:18.775657 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.775668 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.775679 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.775731 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.775743 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.775754 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.775765 | orchestrator | 2025-06-02 13:24:18.775776 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 13:24:18.775787 | orchestrator | Monday 02 June 2025 13:13:54 
+0000 (0:00:00.797) 0:00:07.643 *********** 2025-06-02 13:24:18.775798 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.775809 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.775819 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.775830 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.775840 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.775851 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.775862 | orchestrator | 2025-06-02 13:24:18.775872 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 13:24:18.775883 | orchestrator | Monday 02 June 2025 13:13:55 +0000 (0:00:00.927) 0:00:08.570 *********** 2025-06-02 13:24:18.775894 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:24:18.775970 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 13:24:18.775982 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 13:24:18.775993 | orchestrator | 2025-06-02 13:24:18.776003 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 13:24:18.776038 | orchestrator | Monday 02 June 2025 13:13:56 +0000 (0:00:00.776) 0:00:09.347 *********** 2025-06-02 13:24:18.776051 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.776062 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.776108 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.776120 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.776131 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.776142 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.776152 | orchestrator | 2025-06-02 13:24:18.776177 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 13:24:18.776189 | orchestrator | Monday 02 June 2025 13:13:57 +0000 (0:00:01.165) 
0:00:10.512 *********** 2025-06-02 13:24:18.776200 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:24:18.776220 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 13:24:18.776231 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 13:24:18.776242 | orchestrator | 2025-06-02 13:24:18.776252 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 13:24:18.776263 | orchestrator | Monday 02 June 2025 13:14:00 +0000 (0:00:02.952) 0:00:13.464 *********** 2025-06-02 13:24:18.776274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 13:24:18.776284 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 13:24:18.776295 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 13:24:18.776306 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.776316 | orchestrator | 2025-06-02 13:24:18.776327 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 13:24:18.776337 | orchestrator | Monday 02 June 2025 13:14:01 +0000 (0:00:00.595) 0:00:14.060 *********** 2025-06-02 13:24:18.776349 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776362 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776373 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776384 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.776395 | orchestrator | 2025-06-02 13:24:18.776406 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 13:24:18.776417 | orchestrator | Monday 02 June 2025 13:14:02 +0000 (0:00:00.889) 0:00:14.949 *********** 2025-06-02 13:24:18.776435 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776460 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776471 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.776482 | orchestrator | 2025-06-02 13:24:18.776492 | 
orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 13:24:18.776503 | orchestrator | Monday 02 June 2025 13:14:02 +0000 (0:00:00.409) 0:00:15.359 *********** 2025-06-02 13:24:18.776516 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 13:13:58.430811', 'end': '2025-06-02 13:13:58.686516', 'delta': '0:00:00.255705', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776543 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 13:13:59.518922', 'end': '2025-06-02 13:13:59.790957', 'delta': '0:00:00.272035', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 
13:14:00.288423', 'end': '2025-06-02 13:14:00.554600', 'delta': '0:00:00.266177', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.776567 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.776578 | orchestrator | 2025-06-02 13:24:18.776588 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 13:24:18.776599 | orchestrator | Monday 02 June 2025 13:14:02 +0000 (0:00:00.233) 0:00:15.592 *********** 2025-06-02 13:24:18.776610 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.776621 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.776632 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.776735 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.776747 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.776758 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.776768 | orchestrator | 2025-06-02 13:24:18.776779 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 13:24:18.776790 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:01.173) 0:00:16.765 *********** 2025-06-02 13:24:18.776801 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.776811 | orchestrator | 2025-06-02 13:24:18.776822 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 13:24:18.776838 | orchestrator | Monday 02 June 2025 13:14:04 +0000 (0:00:00.652) 0:00:17.418 *********** 2025-06-02 13:24:18.776849 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 13:24:18.776860 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.776870 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.776881 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.776892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.776902 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.776913 | orchestrator | 2025-06-02 13:24:18.776924 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 13:24:18.776934 | orchestrator | Monday 02 June 2025 13:14:06 +0000 (0:00:01.450) 0:00:18.869 *********** 2025-06-02 13:24:18.776945 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.776956 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.776974 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.776985 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.776996 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777006 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777017 | orchestrator | 2025-06-02 13:24:18.777028 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 13:24:18.777038 | orchestrator | Monday 02 June 2025 13:14:07 +0000 (0:00:01.308) 0:00:20.178 *********** 2025-06-02 13:24:18.777049 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777060 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777091 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.777163 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777199 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777210 | orchestrator | 2025-06-02 13:24:18.777221 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 13:24:18.777232 | orchestrator | Monday 
02 June 2025 13:14:08 +0000 (0:00:00.818) 0:00:20.997 *********** 2025-06-02 13:24:18.777243 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777253 | orchestrator | 2025-06-02 13:24:18.777299 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 13:24:18.777312 | orchestrator | Monday 02 June 2025 13:14:08 +0000 (0:00:00.151) 0:00:21.148 *********** 2025-06-02 13:24:18.777323 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777363 | orchestrator | 2025-06-02 13:24:18.777376 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 13:24:18.777387 | orchestrator | Monday 02 June 2025 13:14:08 +0000 (0:00:00.214) 0:00:21.363 *********** 2025-06-02 13:24:18.777421 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777434 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777465 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777478 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.777488 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777555 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777568 | orchestrator | 2025-06-02 13:24:18.777578 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 13:24:18.777597 | orchestrator | Monday 02 June 2025 13:14:09 +0000 (0:00:00.750) 0:00:22.113 *********** 2025-06-02 13:24:18.777609 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777620 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777685 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777697 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.777708 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777719 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777730 | orchestrator | 2025-06-02 13:24:18.777741 | 
orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 13:24:18.777751 | orchestrator | Monday 02 June 2025 13:14:10 +0000 (0:00:01.063) 0:00:23.177 *********** 2025-06-02 13:24:18.777762 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777773 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777783 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777794 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.777804 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777815 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777826 | orchestrator | 2025-06-02 13:24:18.777836 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 13:24:18.777847 | orchestrator | Monday 02 June 2025 13:14:11 +0000 (0:00:00.777) 0:00:23.955 *********** 2025-06-02 13:24:18.777858 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777868 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777879 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777889 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.777900 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.777919 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.777929 | orchestrator | 2025-06-02 13:24:18.777940 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 13:24:18.777950 | orchestrator | Monday 02 June 2025 13:14:12 +0000 (0:00:00.963) 0:00:24.918 *********** 2025-06-02 13:24:18.777961 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.777972 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.777982 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.777993 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.778226 | orchestrator | skipping: [testbed-node-4] 
2025-06-02 13:24:18.778239 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.778250 | orchestrator | 2025-06-02 13:24:18.778261 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 13:24:18.778271 | orchestrator | Monday 02 June 2025 13:14:13 +0000 (0:00:00.877) 0:00:25.795 *********** 2025-06-02 13:24:18.778282 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.778292 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.778336 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.778347 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.778358 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.778369 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.778379 | orchestrator | 2025-06-02 13:24:18.778390 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 13:24:18.778401 | orchestrator | Monday 02 June 2025 13:14:13 +0000 (0:00:00.824) 0:00:26.620 *********** 2025-06-02 13:24:18.778412 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.778445 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.778457 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.778468 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.778485 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.778496 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.778507 | orchestrator | 2025-06-02 13:24:18.778518 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 13:24:18.778529 | orchestrator | Monday 02 June 2025 13:14:14 +0000 (0:00:00.560) 0:00:27.180 *********** 2025-06-02 13:24:18.778540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 13:24:18.778613 | orchestrator | skipping: [testbed-node-0] => loop1–loop7 (empty loop devices, 0.00 Bytes each; fact dicts elided)
2025-06-02 13:24:18.778716 | orchestrator | skipping: [testbed-node-0] => sda (QEMU HARDDISK, 80.00 GB; sda1 cloudimg-rootfs 79.00 GB, sda14 4.00 MB, sda15 UEFI 106.00 MB, sda16 BOOT 913.00 MB)
2025-06-02 13:24:18.778735 | orchestrator | skipping: [testbed-node-0] => sr0 (QEMU DVD-ROM, config-2, 506.00 KB)
2025-06-02 13:24:18.778752 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.778762 | orchestrator | skipping: [testbed-node-1] => loop0–loop7 (empty loop devices; fact dicts elided)
2025-06-02 13:24:18.778930 | orchestrator | skipping: [testbed-node-1] => sda (QEMU HARDDISK, 80.00 GB; same partition layout as testbed-node-0)
2025-06-02 13:24:18.778946 | orchestrator | skipping: [testbed-node-1] => sr0 (QEMU DVD-ROM, config-2, 506.00 KB)
2025-06-02 13:24:18.778957 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.778967 | orchestrator | skipping: [testbed-node-2] => loop0–loop7 (empty loop devices; fact dicts elided)
2025-06-02 13:24:18.779062 | orchestrator | skipping: [testbed-node-2] => sda (QEMU HARDDISK, 80.00 GB; same partition layout as testbed-node-0)
2025-06-02 13:24:18.779139 | orchestrator | skipping: [testbed-node-2] => sr0 (QEMU DVD-ROM, config-2, 506.00 KB)
2025-06-02 13:24:18.779153 | orchestrator | skipping: [testbed-node-3] => dm-0, dm-1 (ceph OSD block LVs, 20.00 GB each)
2025-06-02 13:24:18.779174 | orchestrator | skipping: [testbed-node-3] => loop0–loop1 (empty loop devices)
2025-06-02 13:24:18.779199 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.779208 | orchestrator | skipping: [testbed-node-3] => loop2–loop7 (empty loop devices; fact dicts elided)
2025-06-02 13:24:18.779299 | orchestrator | skipping: [testbed-node-3] => sda (QEMU HARDDISK, 80.00 GB; same partition layout as testbed-node-0)
2025-06-02 13:24:18.779310 | orchestrator | skipping: [testbed-node-3] => sdb, sdc (QEMU HARDDISK, 20.00 GB each, ceph LVM PVs backing dm-0/dm-1), sdd (QEMU HARDDISK, 20.00 GB, unpartitioned)
2025-06-02 13:24:18.779346 | orchestrator | skipping: [testbed-node-3] => sr0 (QEMU DVD-ROM, config-2, 506.00 KB)
2025-06-02 13:24:18.779355 | orchestrator | skipping: [testbed-node-4] => dm-0, dm-1 (ceph OSD block LVs, 20.00 GB each)
2025-06-02 13:24:18.779375 | orchestrator | skipping: [testbed-node-4] => loop0–loop2 (empty loop devices)
2025-06-02 13:24:18.779404 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.779425 | orchestrator | skipping: [testbed-node-4] => loop3–loop5 (empty loop devices)
2025-06-02 13:24:18.779449 | orchestrator | skipping: [testbed-node-5] => dm-0, dm-1 (ceph OSD block LVs, 20.00 GB each)
2025-06-02 13:24:18.779470 | orchestrator | skipping: [testbed-node-4] => loop6 (empty loop device)
2025-06-02 13:24:18.779485 | orchestrator | skipping: [testbed-node-5] => loop0–loop3 (empty loop devices; fact dicts elided)
2025-06-02 13:24:18.779595 | orchestrator | skipping: [testbed-node-4] => loop7 (empty loop device)
2025-06-02 13:24:18.779603 | orchestrator | skipping: [testbed-node-5] => loop4–loop5 (empty loop devices)
2025-06-02 13:24:18.779642 | orchestrator | skipping: [testbed-node-4] => sda (QEMU HARDDISK, 80.00 GB; same partition layout as testbed-node-0)
2025-06-02 13:24:18.779663 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:24:18.779672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:24:18.779692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ml01Mm-Eihy-BhtQ-obSe-5JAz-Lx7n-weQK6q', 'scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27', 'scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779724 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FN1XK3-XZ3w-OvDj-rY2x-MbI7-9UjC-5ttYQq', 'scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de', 'scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6qZlhN-Fcn9-PIrP-8M7s-Rgq5-4H2D-VocVNU', 'scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855', 'scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JX0vgn-GkDc-wZzb-ThgS-dL5d-eR68-PmwQqJ', 'scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5', 'scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da', 'scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85', 'scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779782 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.779794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:24:18.779803 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.779811 | orchestrator | 2025-06-02 13:24:18.779819 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 13:24:18.779828 | orchestrator | Monday 02 June 2025 13:14:15 +0000 (0:00:01.413) 0:00:28.593 *********** 2025-06-02 13:24:18.779857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779865 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03', 'scsi-SQEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part1', 'scsi-SQEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part14', 'scsi-SQEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part15', 'scsi-SQEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part16', 'scsi-SQEMU_QEMU_HARDDISK_cb0229d0-7921-4720-b37b-7f30618f5b03-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 13:24:18.779959 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779967 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779975 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.779984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780009 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780017 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780030 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780039 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780047 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.780059 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b', 'scsi-SQEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part1', 'scsi-SQEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part14', 'scsi-SQEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part15', 'scsi-SQEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part16', 'scsi-SQEMU_QEMU_HARDDISK_83449818-063b-4895-9e97-f9ff707e075b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780089 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780104 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780113 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780128 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780140 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780149 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780157 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780170 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780178 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780191 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5', 'scsi-SQEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6ac710c-9f57-4dea-affc-5d36aeb63db5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780204 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-56-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780213 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.780226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e', 'dm-uuid-LVM-FYOmAn8QQ4CpCK3nPuhb6AKp6cUG7OV8xhMDe9YbSZKR2ADWyLVKfEDPeTu0i5VR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb', 'dm-uuid-LVM-FdX4Vib8EEVapD4QYvfDgclCWcH1oEGiwuHzUUFwvvQOyEvbkIpXVImafRc4ZJhm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-06-02 13:24:18.780256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780264 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.780273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 13:24:18.780321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780352 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15', 
'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vXJrzG-5k29-eh0q-cywh-xAYl-Ab0M-C3cLoz', 'scsi-0QEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff', 'scsi-SQEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HWiNin-26Hq-V0s1-5H3G-Gy5a-yJiA-NlxMLB', 'scsi-0QEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5', 'scsi-SQEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4', 'scsi-SQEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e', 'dm-uuid-LVM-y5FVoJTWgnJjFFZL5Fwl5aMePA5QPqnhu1pcSxeWHi1uuDJvys6lSBv8CRPykUhf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780439 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e', 'dm-uuid-LVM-Rc0H5YoQMSbBO16r3zMWpGs0vLha2ANBHy7mua31QpB0Yg06fo9xDfXk9G0JGbgL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780502 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18 | INFO  | Task 3977533b-9e98-4608-8f01-a4aa4a5a8802 is in state SUCCESS 2025-06-02 13:24:18.780906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ml01Mm-Eihy-BhtQ-obSe-5JAz-Lx7n-weQK6q', 'scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27', 'scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780923 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JX0vgn-GkDc-wZzb-ThgS-dL5d-eR68-PmwQqJ', 'scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5', 'scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:24:18.780932 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.780940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85', 'scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.780949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.780957 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.780978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c', 'dm-uuid-LVM-lBPkncHf05z5HtoBkcX1eg1pWuqTRftdQebFih2hGl3yDJNEoA7jtK3elwOXvHPl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.780987 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e', 'dm-uuid-LVM-DgHct2KkEKM5qxlUlYXVA6wsYZuSilPpcv1aL2fQ0o39nUSiMJGAmAVSgIxcjGRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.780995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781102 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FN1XK3-XZ3w-OvDj-rY2x-MbI7-9UjC-5ttYQq', 'scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de', 'scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6qZlhN-Fcn9-PIrP-8M7s-Rgq5-4H2D-VocVNU', 'scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855', 'scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da', 'scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:24:18.781195 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.781204 | orchestrator |
2025-06-02 13:24:18.781212 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 13:24:18.781220 | orchestrator | Monday 02 June 2025 13:14:17 +0000 (0:00:01.358) 0:00:29.952 ***********
2025-06-02 13:24:18.781228 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.781236 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.781244 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.781255 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.781263 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.781271 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.781279 | orchestrator |
2025-06-02 13:24:18.781287 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 13:24:18.781295 | orchestrator | Monday 02 June 2025 13:14:18 +0000 (0:00:01.579) 0:00:31.531 ***********
2025-06-02 13:24:18.781302 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.781310 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.781318 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.781326 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.781333 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.781341 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.781349 | orchestrator |
2025-06-02 13:24:18.781356 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 13:24:18.781364 | orchestrator | Monday 02 June 2025 13:14:19 +0000 (0:00:00.521) 0:00:32.053 ***********
2025-06-02 13:24:18.781372 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.781380 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.781387 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.781395 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.781403 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.781410 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.781418 | orchestrator |
2025-06-02 13:24:18.781426 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 13:24:18.781434 | orchestrator | Monday 02 June 2025 13:14:19 +0000 (0:00:00.559) 0:00:32.612 ***********
2025-06-02 13:24:18.781442 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.781449 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.781457 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.781465 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.781472 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.781480 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.781488 | orchestrator |
2025-06-02 13:24:18.781496 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 13:24:18.781505 | orchestrator | Monday 02 June 2025 13:14:20 +0000 (0:00:00.582) 0:00:33.195 ***********
2025-06-02 13:24:18.781514 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.781523 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.781532 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.781541 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.781549 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.781558 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.781567 | orchestrator |
2025-06-02 13:24:18.781576 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 13:24:18.781585 | orchestrator | Monday 02 June 2025 13:14:21 +0000 (0:00:01.102) 0:00:34.298 ***********
2025-06-02 13:24:18.781594 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.781603 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.781612 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.781625 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.781634 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.781643 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.781652 | orchestrator |
2025-06-02 13:24:18.781665 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 13:24:18.781674 | orchestrator | Monday 02 June 2025 13:14:22 +0000 (0:00:00.570) 0:00:34.869 ***********
2025-06-02 13:24:18.781683 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.781692 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 13:24:18.781701 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 13:24:18.781709 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 13:24:18.781718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 13:24:18.781727 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 13:24:18.781736 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 13:24:18.781745 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 13:24:18.781754 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 13:24:18.781763 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 13:24:18.781772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 13:24:18.781780 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 13:24:18.781789 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 13:24:18.781798 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 13:24:18.781807 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 13:24:18.781816 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 13:24:18.781824 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 13:24:18.781833 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 13:24:18.781842 | orchestrator |
2025-06-02 13:24:18.781851 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 13:24:18.781859 | orchestrator | Monday 02 June 2025 13:14:24 +0000 (0:00:02.776) 0:00:37.645 ***********
2025-06-02 13:24:18.781867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.781875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 13:24:18.781883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 13:24:18.781890 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.781898 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 13:24:18.781906 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 13:24:18.781913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 13:24:18.781921 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.781929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 13:24:18.781937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 13:24:18.781945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 13:24:18.781952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 13:24:18.781964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 13:24:18.781972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 13:24:18.781979 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.781987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 13:24:18.781995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 13:24:18.782003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 13:24:18.782010 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782182 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.782195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 13:24:18.782209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 13:24:18.782217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 13:24:18.782225 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.782233 | orchestrator |
2025-06-02 13:24:18.782242 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 13:24:18.782250 | orchestrator | Monday 02 June 2025 13:14:25 +0000 (0:00:00.568) 0:00:38.214 ***********
2025-06-02 13:24:18.782258 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.782266 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.782274 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.782283 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.782291 | orchestrator |
2025-06-02 13:24:18.782299 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 13:24:18.782308 | orchestrator | Monday 02 June 2025 13:14:26 +0000 (0:00:01.160) 0:00:39.374 ***********
2025-06-02 13:24:18.782316 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782324 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.782332 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.782340 | orchestrator |
2025-06-02 13:24:18.782349 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 13:24:18.782357 | orchestrator | Monday 02 June 2025 13:14:27 +0000 (0:00:00.338) 0:00:39.713 ***********
2025-06-02 13:24:18.782365 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782373 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.782381 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.782389 | orchestrator |
2025-06-02 13:24:18.782397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 13:24:18.782405 | orchestrator | Monday 02 June 2025 13:14:27 +0000 (0:00:00.456) 0:00:40.170 ***********
2025-06-02 13:24:18.782414 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782422 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.782430 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.782439 | orchestrator |
2025-06-02 13:24:18.782447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 13:24:18.782461 | orchestrator | Monday 02 June 2025 13:14:27 +0000 (0:00:00.340) 0:00:40.511 ***********
2025-06-02 13:24:18.782469 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.782478 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.782486 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.782495 | orchestrator |
2025-06-02 13:24:18.782503 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 13:24:18.782512 | orchestrator | Monday 02 June 2025 13:14:28 +0000 (0:00:00.610) 0:00:41.122 ***********
2025-06-02 13:24:18.782538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:24:18.782546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:24:18.782554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:24:18.782562 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782571 | orchestrator |
2025-06-02 13:24:18.782580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 13:24:18.782588 | orchestrator | Monday 02 June 2025 13:14:28 +0000 (0:00:00.314) 0:00:41.436 ***********
2025-06-02 13:24:18.782597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:24:18.782605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:24:18.782613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:24:18.782622 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782631 | orchestrator |
2025-06-02 13:24:18.782639 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 13:24:18.782646 | orchestrator | Monday 02 June 2025 13:14:29 +0000 (0:00:00.331) 0:00:41.768 ***********
2025-06-02 13:24:18.782657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:24:18.782665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:24:18.782672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:24:18.782679 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.782686 | orchestrator |
2025-06-02 13:24:18.782694 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 13:24:18.782701 | orchestrator | Monday 02 June 2025 13:14:29 +0000 (0:00:00.539) 0:00:42.307 ***********
2025-06-02 13:24:18.782708 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.782715 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.782722 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.782730 | orchestrator |
2025-06-02 13:24:18.782737 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 13:24:18.782744 | orchestrator | Monday 02 June 2025 13:14:30 +0000 (0:00:00.502) 0:00:42.809 ***********
2025-06-02 13:24:18.782751 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 13:24:18.782758 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 13:24:18.782766 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 13:24:18.782773 | orchestrator |
2025-06-02 13:24:18.782780 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 13:24:18.782788 | orchestrator | Monday 02 June 2025 13:14:31 +0000 (0:00:01.114) 0:00:43.924 ***********
2025-06-02 13:24:18.782820 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.782830 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 13:24:18.782838 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 13:24:18.782847 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 13:24:18.782855 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 13:24:18.782864 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 13:24:18.782872 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 13:24:18.782880 | orchestrator |
2025-06-02 13:24:18.782888 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 13:24:18.782897 | orchestrator | Monday 02 June 2025 13:14:32 +0000 (0:00:00.855) 0:00:44.779 ***********
2025-06-02 13:24:18.782905 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.782913 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 13:24:18.782921 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 13:24:18.782929 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-02 13:24:18.782936 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 13:24:18.782945 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 13:24:18.782953 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 13:24:18.782961 | orchestrator |
2025-06-02 13:24:18.782969 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 13:24:18.782978 | orchestrator | Monday 02 June 2025 13:14:33 +0000 (0:00:01.898) 0:00:46.678 ***********
2025-06-02 13:24:18.782986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.782996 | orchestrator |
2025-06-02 13:24:18.783004 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 13:24:18.783012 | orchestrator | Monday 02 June 2025 13:14:35 +0000 (0:00:01.166) 0:00:47.844 ***********
2025-06-02 13:24:18.783038 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.783045 | orchestrator |
2025-06-02 13:24:18.783055 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 13:24:18.783062 | orchestrator | Monday 02 June 2025 13:14:36 +0000 (0:00:01.738) 0:00:49.582 ***********
2025-06-02 13:24:18.783085 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.783098 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.783110 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783121 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.783130 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783137 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783144 | orchestrator |
2025-06-02 13:24:18.783150 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 13:24:18.783157 | orchestrator | Monday 02 June 2025 13:14:38 +0000 (0:00:01.150) 0:00:50.733 ***********
2025-06-02 13:24:18.783163 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783170 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783177 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783183 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783190 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783196 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783203 | orchestrator |
2025-06-02 13:24:18.783209 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 13:24:18.783216 | orchestrator | Monday 02 June 2025 13:14:39 +0000 (0:00:01.517) 0:00:52.251 ***********
2025-06-02 13:24:18.783222 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783229 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783236 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783242 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783249 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783255 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783262 | orchestrator |
2025-06-02 13:24:18.783269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 13:24:18.783275 | orchestrator | Monday 02 June 2025 13:14:41 +0000 (0:00:01.554) 0:00:53.805 ***********
2025-06-02 13:24:18.783282 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783288 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783295 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783301 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783308 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783314 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783321 | orchestrator |
2025-06-02 13:24:18.783327 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 13:24:18.783334 | orchestrator | Monday 02 June 2025 13:14:42 +0000 (0:00:01.122) 0:00:54.928 ***********
2025-06-02 13:24:18.783340 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.783347 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783353 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783360 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.783366 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783373 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.783380 | orchestrator |
2025-06-02 13:24:18.783386 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 13:24:18.783393 | orchestrator | Monday 02 June 2025 13:14:43 +0000 (0:00:01.036) 0:00:55.964 ***********
2025-06-02 13:24:18.783423 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783431 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783438 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783444 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783451 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783457 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783464 | orchestrator |
2025-06-02 13:24:18.783470 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 13:24:18.783485 | orchestrator | Monday 02 June 2025 13:14:43 +0000 (0:00:00.621) 0:00:56.586 ***********
2025-06-02 13:24:18.783492 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783498 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783505 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783512 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783518 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783525 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783531 | orchestrator |
2025-06-02 13:24:18.783538 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 13:24:18.783544 | orchestrator | Monday 02 June 2025 13:14:45 +0000 (0:00:01.309) 0:00:57.896 ***********
2025-06-02 13:24:18.783551 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.783557 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.783564 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.783570 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783577 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783583 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783590 | orchestrator |
2025-06-02 13:24:18.783596 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 13:24:18.783603 | orchestrator | Monday 02 June 2025 13:14:46 +0000 (0:00:01.488) 0:00:59.385 ***********
2025-06-02 13:24:18.783609 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.783616 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.783622 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.783629 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783635 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783642 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783648 | orchestrator |
2025-06-02 13:24:18.783655 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 13:24:18.783662 | orchestrator | Monday 02 June 2025 13:14:48 +0000 (0:00:01.557) 0:01:00.942 ***********
2025-06-02 13:24:18.783669 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783675 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783682 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783688 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783695 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783701 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783708 | orchestrator |
2025-06-02 13:24:18.783714 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 13:24:18.783721 | orchestrator | Monday 02 June 2025 13:14:48 +0000 (0:00:00.586) 0:01:01.528 ***********
2025-06-02 13:24:18.783727 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.783734 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.783740 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.783747 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.783753 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.783764 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.783770 | orchestrator |
2025-06-02 13:24:18.783777 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 13:24:18.783784 | orchestrator | Monday 02 June 2025 13:14:49 +0000 (0:00:00.718) 0:01:02.247 ***********
2025-06-02 13:24:18.783790 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783797 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783803 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783810 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783816 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783823 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783830 | orchestrator |
2025-06-02 13:24:18.783836 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 13:24:18.783843 | orchestrator | Monday 02 June 2025 13:14:50 +0000 (0:00:00.645) 0:01:02.893 ***********
2025-06-02 13:24:18.783849 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783856 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783866 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783873 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783879 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783886 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783892 | orchestrator |
2025-06-02 13:24:18.783899 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 13:24:18.783906 | orchestrator | Monday 02 June 2025 13:14:51 +0000 (0:00:00.852) 0:01:03.745 ***********
2025-06-02 13:24:18.783912 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783919 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783925 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.783931 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.783938 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.783944 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.783951 | orchestrator |
2025-06-02 13:24:18.783958 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 13:24:18.783964 | orchestrator | Monday 02 June 2025 13:14:51 +0000 (0:00:00.610) 0:01:04.356 ***********
2025-06-02 13:24:18.783971 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.783977 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.783984 |
orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.783990 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.783996 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784003 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784009 | orchestrator | 2025-06-02 13:24:18.784016 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 13:24:18.784022 | orchestrator | Monday 02 June 2025 13:14:52 +0000 (0:00:00.735) 0:01:05.092 *********** 2025-06-02 13:24:18.784029 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.784035 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784042 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784048 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784055 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784062 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784082 | orchestrator | 2025-06-02 13:24:18.784090 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 13:24:18.784116 | orchestrator | Monday 02 June 2025 13:14:53 +0000 (0:00:00.622) 0:01:05.715 *********** 2025-06-02 13:24:18.784124 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.784130 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.784137 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.784143 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784150 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784157 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784163 | orchestrator | 2025-06-02 13:24:18.784170 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 13:24:18.784176 | orchestrator | Monday 02 June 2025 13:14:53 +0000 (0:00:00.980) 0:01:06.695 *********** 2025-06-02 13:24:18.784183 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 13:24:18.784190 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.784196 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.784203 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.784209 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.784216 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.784222 | orchestrator | 2025-06-02 13:24:18.784229 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 13:24:18.784235 | orchestrator | Monday 02 June 2025 13:14:54 +0000 (0:00:00.626) 0:01:07.322 *********** 2025-06-02 13:24:18.784242 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.784249 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.784255 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.784262 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.784268 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.784275 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.784285 | orchestrator | 2025-06-02 13:24:18.784292 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-02 13:24:18.784299 | orchestrator | Monday 02 June 2025 13:14:55 +0000 (0:00:01.258) 0:01:08.581 *********** 2025-06-02 13:24:18.784305 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.784312 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.784318 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.784325 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.784331 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.784338 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.784344 | orchestrator | 2025-06-02 13:24:18.784351 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-02 13:24:18.784358 | orchestrator | Monday 02 June 2025 13:14:57 +0000 (0:00:01.829) 0:01:10.411 
*********** 2025-06-02 13:24:18.784364 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.784371 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.784377 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.784384 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.784390 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.784397 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.784403 | orchestrator | 2025-06-02 13:24:18.784410 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-02 13:24:18.784416 | orchestrator | Monday 02 June 2025 13:14:59 +0000 (0:00:02.028) 0:01:12.439 *********** 2025-06-02 13:24:18.784426 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.784433 | orchestrator | 2025-06-02 13:24:18.784439 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-02 13:24:18.784446 | orchestrator | Monday 02 June 2025 13:15:00 +0000 (0:00:01.125) 0:01:13.565 *********** 2025-06-02 13:24:18.784453 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.784459 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784466 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784472 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784478 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784485 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784491 | orchestrator | 2025-06-02 13:24:18.784498 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-02 13:24:18.784505 | orchestrator | Monday 02 June 2025 13:15:01 +0000 (0:00:00.749) 0:01:14.315 *********** 2025-06-02 13:24:18.784511 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 13:24:18.784518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784524 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784531 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784537 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784544 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784550 | orchestrator | 2025-06-02 13:24:18.784557 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-02 13:24:18.784563 | orchestrator | Monday 02 June 2025 13:15:02 +0000 (0:00:00.523) 0:01:14.838 *********** 2025-06-02 13:24:18.784570 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784576 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784583 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784589 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 13:24:18.784596 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 13:24:18.784603 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784609 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784620 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 13:24:18.784627 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-02 13:24:18.784633 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 13:24:18.784640 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 
13:24:18.784646 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-02 13:24:18.784653 | orchestrator | 2025-06-02 13:24:18.784677 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-02 13:24:18.784685 | orchestrator | Monday 02 June 2025 13:15:03 +0000 (0:00:01.562) 0:01:16.400 *********** 2025-06-02 13:24:18.784691 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.784698 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.784705 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.784711 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.784718 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.784724 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.784731 | orchestrator | 2025-06-02 13:24:18.784737 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-02 13:24:18.784744 | orchestrator | Monday 02 June 2025 13:15:04 +0000 (0:00:01.012) 0:01:17.413 *********** 2025-06-02 13:24:18.784750 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.784757 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784764 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784770 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784777 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784783 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784789 | orchestrator | 2025-06-02 13:24:18.784796 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-02 13:24:18.784803 | orchestrator | Monday 02 June 2025 13:15:05 +0000 (0:00:00.812) 0:01:18.225 *********** 2025-06-02 13:24:18.784809 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.784816 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784822 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784829 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784836 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784842 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784849 | orchestrator | 2025-06-02 13:24:18.784855 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-02 13:24:18.784862 | orchestrator | Monday 02 June 2025 13:15:06 +0000 (0:00:00.563) 0:01:18.789 *********** 2025-06-02 13:24:18.784868 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.784875 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.784881 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.784888 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.784894 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.784901 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.784907 | orchestrator | 2025-06-02 13:24:18.784914 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-02 13:24:18.784921 | orchestrator | Monday 02 June 2025 13:15:06 +0000 (0:00:00.761) 0:01:19.551 *********** 2025-06-02 13:24:18.784927 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.784934 | orchestrator | 2025-06-02 13:24:18.784941 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-02 13:24:18.784947 | orchestrator | Monday 02 June 2025 13:15:07 +0000 (0:00:01.128) 0:01:20.679 *********** 2025-06-02 13:24:18.784957 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.784963 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.784970 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.784981 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 13:24:18.784988 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.784994 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.785001 | orchestrator | 2025-06-02 13:24:18.785007 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-02 13:24:18.785014 | orchestrator | Monday 02 June 2025 13:16:00 +0000 (0:00:52.253) 0:02:12.932 *********** 2025-06-02 13:24:18.785021 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785027 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785034 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785040 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785047 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785053 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785060 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785066 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785111 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785118 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785124 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785131 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785138 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785144 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785151 | orchestrator | skipping: 
[testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785157 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785164 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785171 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785177 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785184 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785190 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-02 13:24:18.785197 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-02 13:24:18.785203 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-02 13:24:18.785231 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785239 | orchestrator | 2025-06-02 13:24:18.785246 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-02 13:24:18.785252 | orchestrator | Monday 02 June 2025 13:16:01 +0000 (0:00:00.821) 0:02:13.754 *********** 2025-06-02 13:24:18.785259 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785265 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785272 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785278 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785285 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785292 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785298 | orchestrator | 2025-06-02 13:24:18.785305 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-02 13:24:18.785312 | orchestrator | Monday 02 June 2025 13:16:01 +0000 (0:00:00.522) 0:02:14.277 *********** 2025-06-02 13:24:18.785318 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785325 | orchestrator | 2025-06-02 13:24:18.785331 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-02 13:24:18.785338 | orchestrator | Monday 02 June 2025 13:16:01 +0000 (0:00:00.135) 0:02:14.412 *********** 2025-06-02 13:24:18.785349 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785356 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785363 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785369 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785376 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785382 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785389 | orchestrator | 2025-06-02 13:24:18.785396 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-02 13:24:18.785402 | orchestrator | Monday 02 June 2025 13:16:02 +0000 (0:00:00.733) 0:02:15.146 *********** 2025-06-02 13:24:18.785409 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785415 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785422 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785428 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785435 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785441 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785448 | orchestrator | 2025-06-02 13:24:18.785455 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-02 13:24:18.785476 | orchestrator | Monday 02 June 2025 13:16:03 +0000 (0:00:00.590) 0:02:15.736 *********** 2025-06-02 13:24:18.785483 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785490 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785503 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785509 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785516 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785522 | orchestrator | 2025-06-02 13:24:18.785529 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-02 13:24:18.785536 | orchestrator | Monday 02 June 2025 13:16:04 +0000 (0:00:00.987) 0:02:16.723 *********** 2025-06-02 13:24:18.785542 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.785549 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.785556 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.785566 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.785572 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.785579 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.785586 | orchestrator | 2025-06-02 13:24:18.785592 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-02 13:24:18.785599 | orchestrator | Monday 02 June 2025 13:16:06 +0000 (0:00:02.095) 0:02:18.818 *********** 2025-06-02 13:24:18.785606 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.785612 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.785619 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.785626 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.785632 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.785638 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.785644 | orchestrator | 2025-06-02 13:24:18.785650 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-02 13:24:18.785657 | orchestrator | Monday 02 June 2025 13:16:07 +0000 (0:00:00.960) 0:02:19.779 *********** 2025-06-02 13:24:18.785663 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.785670 | orchestrator | 2025-06-02 13:24:18.785676 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-02 13:24:18.785683 | orchestrator | Monday 02 June 2025 13:16:08 +0000 (0:00:01.205) 0:02:20.984 *********** 2025-06-02 13:24:18.785689 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785695 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785701 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785707 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785713 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785719 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785726 | orchestrator | 2025-06-02 13:24:18.785732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-02 13:24:18.785744 | orchestrator | Monday 02 June 2025 13:16:08 +0000 (0:00:00.474) 0:02:21.458 *********** 2025-06-02 13:24:18.785750 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785756 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785762 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785769 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785775 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785781 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785787 | orchestrator | 2025-06-02 13:24:18.785793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-02 13:24:18.785799 | orchestrator | Monday 02 June 2025 13:16:09 +0000 (0:00:00.649) 0:02:22.108 *********** 2025-06-02 13:24:18.785805 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785812 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785818 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785824 | 
orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785830 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785836 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785842 | orchestrator | 2025-06-02 13:24:18.785865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-02 13:24:18.785873 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.666) 0:02:22.774 *********** 2025-06-02 13:24:18.785879 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785885 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785891 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785897 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785903 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785909 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785915 | orchestrator | 2025-06-02 13:24:18.785921 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-02 13:24:18.785928 | orchestrator | Monday 02 June 2025 13:16:10 +0000 (0:00:00.633) 0:02:23.408 *********** 2025-06-02 13:24:18.785934 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.785945 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.785952 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.785958 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.785965 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.785971 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.785977 | orchestrator | 2025-06-02 13:24:18.785983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-02 13:24:18.785989 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.534) 0:02:23.942 *********** 2025-06-02 13:24:18.785995 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.786001 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.786007 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.786013 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.786037 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.786044 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.786050 | orchestrator | 2025-06-02 13:24:18.786056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-02 13:24:18.786062 | orchestrator | Monday 02 June 2025 13:16:11 +0000 (0:00:00.646) 0:02:24.589 *********** 2025-06-02 13:24:18.786080 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.786087 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.786093 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.786099 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.786105 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.786111 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.786117 | orchestrator | 2025-06-02 13:24:18.786124 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-02 13:24:18.786130 | orchestrator | Monday 02 June 2025 13:16:12 +0000 (0:00:00.586) 0:02:25.177 *********** 2025-06-02 13:24:18.786140 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.786146 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.786152 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.786158 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.786164 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.786170 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.786176 | orchestrator | 2025-06-02 13:24:18.786182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-02 13:24:18.786188 | orchestrator | Monday 02 June 2025 13:16:13 +0000 
(0:00:00.702) 0:02:25.879 *********** 2025-06-02 13:24:18.786198 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.786204 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.786210 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.786216 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.786222 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.786228 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.786234 | orchestrator | 2025-06-02 13:24:18.786240 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-02 13:24:18.786247 | orchestrator | Monday 02 June 2025 13:16:14 +0000 (0:00:01.124) 0:02:27.003 *********** 2025-06-02 13:24:18.786257 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.786268 | orchestrator | 2025-06-02 13:24:18.786284 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-02 13:24:18.786295 | orchestrator | Monday 02 June 2025 13:16:15 +0000 (0:00:01.331) 0:02:28.335 *********** 2025-06-02 13:24:18.786304 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-02 13:24:18.786313 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-02 13:24:18.786322 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-02 13:24:18.786332 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786343 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-02 13:24:18.786354 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786364 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-02 13:24:18.786375 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786381 | orchestrator | 
changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-02 13:24:18.786387 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786393 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786399 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786405 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786411 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786418 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-02 13:24:18.786424 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-02 13:24:18.786430 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786436 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-02 13:24:18.786442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-02 13:24:18.786448 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786454 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-02 13:24:18.786484 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-02 13:24:18.786491 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-02 13:24:18.786498 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-02 13:24:18.786504 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-02 13:24:18.786510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-02 13:24:18.786522 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-02 13:24:18.786528 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-02 13:24:18.786534 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 
2025-06-02 13:24:18.786540 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-02 13:24:18.786546 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-02 13:24:18.786552 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-02 13:24:18.786558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786564 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-02 13:24:18.786570 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-02 13:24:18.786576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786582 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-02 13:24:18.786588 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786600 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786606 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-02 13:24:18.786612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786624 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786636 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-02 13:24:18.786648 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786654 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786660 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-02 13:24:18.786682 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786688 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786700 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786707 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786713 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 13:24:18.786719 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786725 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786731 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786737 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786743 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786749 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786755 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 13:24:18.786761 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786767 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786779 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786791 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786797 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 13:24:18.786803 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786815 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786821 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786827 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 13:24:18.786833 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786839 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786868 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786875 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-02 13:24:18.786882 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 13:24:18.786888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786894 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786900 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-02 13:24:18.786906 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-02 13:24:18.786912 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 13:24:18.786918 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-02 13:24:18.786924 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-02 13:24:18.786930 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-02 13:24:18.786936 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-02 13:24:18.786942 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-02 13:24:18.786948 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-02 13:24:18.786955 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-02 13:24:18.786960 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-02 13:24:18.786967 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-02 13:24:18.786973 | orchestrator |
2025-06-02 13:24:18.786979 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-02 13:24:18.786985 | orchestrator | Monday 02 June 2025  13:16:21 +0000 (0:00:06.186) 0:02:34.522 ***********
2025-06-02 13:24:18.786991 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.786997 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787003 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787009 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.787015 | orchestrator |
2025-06-02 13:24:18.787021 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-02 13:24:18.787027 | orchestrator | Monday 02 June 2025  13:16:22 +0000 (0:00:01.010) 0:02:35.533 ***********
2025-06-02 13:24:18.787034 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787040 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787053 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787059 | orchestrator |
2025-06-02 13:24:18.787065 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-02 13:24:18.787085 | orchestrator | Monday 02 June 2025  13:16:23 +0000 (0:00:00.664) 0:02:36.197 ***********
2025-06-02 13:24:18.787091 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787097 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787103 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787110 | orchestrator |
2025-06-02 13:24:18.787116 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-02 13:24:18.787122 | orchestrator | Monday 02 June 2025  13:16:24 +0000 (0:00:01.442) 0:02:37.640 ***********
2025-06-02 13:24:18.787128 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787134 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787140 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787146 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.787152 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.787159 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.787165 | orchestrator |
2025-06-02 13:24:18.787171 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-02 13:24:18.787177 | orchestrator | Monday 02 June 2025  13:16:25 +0000 (0:00:00.740) 0:02:38.380 ***********
2025-06-02 13:24:18.787183 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787189 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787195 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787201 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.787207 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.787213 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.787220 | orchestrator |
2025-06-02 13:24:18.787226 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-02 13:24:18.787232 | orchestrator | Monday 02 June 2025  13:16:26 +0000 (0:00:00.848) 0:02:39.229 ***********
2025-06-02 13:24:18.787238 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787244 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787250 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787256 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787262 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787268 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787274 | orchestrator |
2025-06-02 13:24:18.787281 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-02 13:24:18.787287 | orchestrator | Monday 02 June 2025  13:16:27 +0000 (0:00:00.743) 0:02:39.972 ***********
2025-06-02 13:24:18.787293 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787299 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787322 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787330 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787336 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787342 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787348 | orchestrator |
2025-06-02 13:24:18.787354 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-02 13:24:18.787360 | orchestrator | Monday 02 June 2025  13:16:28 +0000 (0:00:00.859) 0:02:40.831 ***********
2025-06-02 13:24:18.787366 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787372 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787378 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787384 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787390 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787400 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787406 | orchestrator |
2025-06-02 13:24:18.787413 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-02 13:24:18.787419 | orchestrator | Monday 02 June 2025  13:16:28 +0000 (0:00:00.633) 0:02:41.464 ***********
2025-06-02 13:24:18.787425 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787431 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787437 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787443 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787449 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787455 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787461 | orchestrator |
2025-06-02 13:24:18.787467 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-02 13:24:18.787473 | orchestrator | Monday 02 June 2025  13:16:29 +0000 (0:00:00.808) 0:02:42.273 ***********
2025-06-02 13:24:18.787479 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787486 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787492 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787498 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787504 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787510 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787516 | orchestrator |
2025-06-02 13:24:18.787522 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-02 13:24:18.787528 | orchestrator | Monday 02 June 2025  13:16:30 +0000 (0:00:00.675) 0:02:42.949 ***********
2025-06-02 13:24:18.787534 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787540 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787546 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787552 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787558 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787564 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787570 | orchestrator |
2025-06-02 13:24:18.787576 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-02 13:24:18.787582 | orchestrator | Monday 02 June 2025  13:16:30 +0000 (0:00:00.749) 0:02:43.698 ***********
2025-06-02 13:24:18.787588 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787594 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787603 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787609 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.787616 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.787622 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.787628 | orchestrator |
2025-06-02 13:24:18.787634 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-02 13:24:18.787640 | orchestrator | Monday 02 June 2025  13:16:33 +0000 (0:00:02.929) 0:02:46.628 ***********
2025-06-02 13:24:18.787646 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787652 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787658 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787664 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.787670 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.787676 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.787683 | orchestrator |
2025-06-02 13:24:18.787689 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-02 13:24:18.787695 | orchestrator | Monday 02 June 2025  13:16:34 +0000 (0:00:01.070) 0:02:47.698 ***********
2025-06-02 13:24:18.787701 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787707 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787713 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787719 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.787725 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.787731 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.787737 | orchestrator |
2025-06-02 13:24:18.787743 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-02 13:24:18.787753 | orchestrator | Monday 02 June 2025  13:16:35 +0000 (0:00:00.781) 0:02:48.480 ***********
2025-06-02 13:24:18.787759 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787765 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787771 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787777 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787783 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787789 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787795 | orchestrator |
2025-06-02 13:24:18.787801 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-02 13:24:18.787807 | orchestrator | Monday 02 June 2025  13:16:36 +0000 (0:00:01.038) 0:02:49.518 ***********
2025-06-02 13:24:18.787813 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787819 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787825 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787831 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787838 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787844 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.787850 | orchestrator |
2025-06-02 13:24:18.787856 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-02 13:24:18.787878 | orchestrator | Monday 02 June 2025  13:16:37 +0000 (0:00:00.804) 0:02:50.322 ***********
2025-06-02 13:24:18.787885 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787892 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787898 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.787905 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-02 13:24:18.787912 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-02 13:24:18.787919 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.787925 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-02 13:24:18.787931 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-02 13:24:18.787937 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.787944 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-02 13:24:18.787953 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-02 13:24:18.787963 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.787969 | orchestrator |
2025-06-02 13:24:18.787975 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-02 13:24:18.787981 | orchestrator | Monday 02 June 2025  13:16:38 +0000 (0:00:00.869) 0:02:51.191 ***********
2025-06-02 13:24:18.787987 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.787993 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.787999 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788005 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788011 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788017 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788023 | orchestrator |
2025-06-02 13:24:18.788029 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-02 13:24:18.788035 | orchestrator | Monday 02 June 2025  13:16:39 +0000 (0:00:00.535) 0:02:51.727 ***********
2025-06-02 13:24:18.788041 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788047 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788053 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788059 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788065 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788082 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788088 | orchestrator |
2025-06-02 13:24:18.788094 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 13:24:18.788101 | orchestrator | Monday 02 June 2025  13:16:39 +0000 (0:00:00.707) 0:02:52.434 ***********
2025-06-02 13:24:18.788107 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788113 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788119 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788125 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788131 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788138 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788144 | orchestrator |
2025-06-02 13:24:18.788150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 13:24:18.788156 | orchestrator | Monday 02 June 2025  13:16:40 +0000 (0:00:00.590) 0:02:53.025 ***********
2025-06-02 13:24:18.788162 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788168 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788175 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788181 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788187 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788193 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788199 | orchestrator |
2025-06-02 13:24:18.788205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 13:24:18.788212 | orchestrator | Monday 02 June 2025  13:16:41 +0000 (0:00:00.731) 0:02:53.757 ***********
2025-06-02 13:24:18.788218 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788224 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788230 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788253 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788260 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788266 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788272 | orchestrator |
2025-06-02 13:24:18.788278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 13:24:18.788284 | orchestrator | Monday 02 June 2025  13:16:41 +0000 (0:00:00.564) 0:02:54.321 ***********
2025-06-02 13:24:18.788290 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788296 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788303 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788309 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.788315 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.788321 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.788332 | orchestrator |
2025-06-02 13:24:18.788338 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 13:24:18.788344 | orchestrator | Monday 02 June 2025  13:16:42 +0000 (0:00:00.731) 0:02:55.053 ***********
2025-06-02 13:24:18.788350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 13:24:18.788356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 13:24:18.788362 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 13:24:18.788368 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788374 | orchestrator |
2025-06-02 13:24:18.788380 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 13:24:18.788387 | orchestrator | Monday 02 June 2025  13:16:42 +0000 (0:00:00.374) 0:02:55.427 ***********
2025-06-02 13:24:18.788393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 13:24:18.788399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 13:24:18.788405 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 13:24:18.788411 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788417 | orchestrator |
2025-06-02 13:24:18.788423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 13:24:18.788429 | orchestrator | Monday 02 June 2025  13:16:43 +0000 (0:00:00.381) 0:02:55.809 ***********
2025-06-02 13:24:18.788435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 13:24:18.788441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 13:24:18.788447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 13:24:18.788453 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788459 | orchestrator |
2025-06-02 13:24:18.788465 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 13:24:18.788472 | orchestrator | Monday 02 June 2025  13:16:43 +0000 (0:00:00.355) 0:02:56.165 ***********
2025-06-02 13:24:18.788478 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788484 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788490 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788496 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.788502 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.788508 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.788514 | orchestrator |
2025-06-02 13:24:18.788523 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 13:24:18.788529 | orchestrator | Monday 02 June 2025  13:16:44 +0000 (0:00:00.607) 0:02:56.772 ***********
2025-06-02 13:24:18.788535 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-02 13:24:18.788541 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788547 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-02 13:24:18.788553 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788560 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-02 13:24:18.788566 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788572 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 13:24:18.788578 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 13:24:18.788584 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 13:24:18.788590 | orchestrator |
2025-06-02 13:24:18.788596 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-02 13:24:18.788602 | orchestrator | Monday 02 June 2025  13:16:45 +0000 (0:00:01.768) 0:02:58.540 ***********
2025-06-02 13:24:18.788608 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.788614 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.788620 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.788626 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.788632 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.788638 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.788644 | orchestrator |
2025-06-02 13:24:18.788650 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 13:24:18.788661 | orchestrator | Monday 02 June 2025  13:16:48 +0000 (0:00:02.328) 0:03:00.869 ***********
2025-06-02 13:24:18.788668 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.788674 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.788680 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.788686 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.788692 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.788698 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.788704 | orchestrator |
2025-06-02 13:24:18.788710 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-02 13:24:18.788716 | orchestrator | Monday 02 June 2025  13:16:49 +0000 (0:00:00.983) 0:03:01.853 ***********
2025-06-02 13:24:18.788722 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.788728 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.788734 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.788740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:24:18.788746 | orchestrator |
2025-06-02 13:24:18.788753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-02 13:24:18.788759 | orchestrator | Monday 02 June 2025  13:16:49 +0000 (0:00:00.836) 0:03:02.690 ***********
2025-06-02 13:24:18.788765 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.788771 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.788777 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.788783 | orchestrator |
2025-06-02 13:24:18.788789 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-02 13:24:18.788811 | orchestrator | Monday 02 June 2025  13:16:50 +0000 (0:00:00.273) 0:03:02.963 ***********
2025-06-02 13:24:18.788818 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.788824 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.788830 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.788836 | orchestrator |
2025-06-02 13:24:18.788842 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-02 13:24:18.788848 | orchestrator | Monday 02 June 2025  13:16:51 +0000 (0:00:01.326) 0:03:04.289 ***********
2025-06-02 13:24:18.788854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.788861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 13:24:18.788867 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 13:24:18.788873 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788879 | orchestrator |
2025-06-02 13:24:18.788885 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-02 13:24:18.788891 | orchestrator | Monday 02 June 2025  13:16:52 +0000 (0:00:00.540) 0:03:04.830 ***********
2025-06-02 13:24:18.788897 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.788903 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.788909 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.788915 | orchestrator |
2025-06-02 13:24:18.788921 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-02 13:24:18.788927 | orchestrator | Monday 02 June 2025  13:16:52 +0000 (0:00:00.286) 0:03:05.117 ***********
2025-06-02 13:24:18.788933 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.788939 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.788945 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.788952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.788958 | orchestrator |
2025-06-02 13:24:18.788964 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-02 13:24:18.788970 | orchestrator | Monday 02 June 2025  13:16:53 +0000 (0:00:00.916) 0:03:06.034 ***********
2025-06-02 13:24:18.788976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:24:18.788982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:24:18.788988 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:24:18.788998 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789004 | orchestrator |
2025-06-02 13:24:18.789010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-02 13:24:18.789016 | orchestrator | Monday 02 June 2025  13:16:53 +0000 (0:00:00.352) 0:03:06.386 ***********
2025-06-02 13:24:18.789022 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789028 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.789034 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.789040 | orchestrator |
2025-06-02 13:24:18.789046 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-02 13:24:18.789055 | orchestrator | Monday 02 June 2025  13:16:53 +0000 (0:00:00.276) 0:03:06.663 ***********
2025-06-02 13:24:18.789061 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789095 | orchestrator |
2025-06-02 13:24:18.789103 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-02 13:24:18.789110 | orchestrator | Monday 02 June 2025  13:16:54 +0000 (0:00:00.217) 0:03:06.880 ***********
2025-06-02 13:24:18.789116 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789122 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.789128 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.789134 | orchestrator |
2025-06-02 13:24:18.789140 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-02 13:24:18.789146 | orchestrator | Monday 02 June 2025  13:16:54 +0000 (0:00:00.244) 0:03:07.125 ***********
2025-06-02 13:24:18.789152 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789158 | orchestrator |
2025-06-02 13:24:18.789165 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-02 13:24:18.789171 | orchestrator | Monday 02 June 2025  13:16:54 +0000 (0:00:00.187) 0:03:07.313 ***********
2025-06-02 13:24:18.789177 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789183 | orchestrator |
2025-06-02 13:24:18.789189 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-02 13:24:18.789195 | orchestrator | Monday 02 June 2025  13:16:54 +0000 (0:00:00.191) 0:03:07.504 ***********
2025-06-02 13:24:18.789201 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789207 | orchestrator |
2025-06-02 13:24:18.789213 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-02 13:24:18.789219 | orchestrator | Monday 02 June 2025  13:16:55 +0000 (0:00:00.246) 0:03:07.751 ***********
2025-06-02 13:24:18.789225 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.789231 | orchestrator |
2025-06-02 13:24:18.789238 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 13:24:18.789244 | orchestrator | Monday 02 June 2025 13:16:55 +0000 (0:00:00.183) 0:03:07.934 *********** 2025-06-02 13:24:18.789250 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789256 | orchestrator | 2025-06-02 13:24:18.789262 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 13:24:18.789268 | orchestrator | Monday 02 June 2025 13:16:55 +0000 (0:00:00.197) 0:03:08.132 *********** 2025-06-02 13:24:18.789274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.789280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.789286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 13:24:18.789293 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789299 | orchestrator | 2025-06-02 13:24:18.789305 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 13:24:18.789311 | orchestrator | Monday 02 June 2025 13:16:55 +0000 (0:00:00.374) 0:03:08.506 *********** 2025-06-02 13:24:18.789317 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789323 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.789329 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.789335 | orchestrator | 2025-06-02 13:24:18.789359 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 13:24:18.789371 | orchestrator | Monday 02 June 2025 13:16:56 +0000 (0:00:00.275) 0:03:08.782 *********** 2025-06-02 13:24:18.789377 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789383 | orchestrator | 2025-06-02 13:24:18.789389 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 
13:24:18.789395 | orchestrator | Monday 02 June 2025 13:16:56 +0000 (0:00:00.195) 0:03:08.977 *********** 2025-06-02 13:24:18.789401 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789407 | orchestrator | 2025-06-02 13:24:18.789413 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 13:24:18.789420 | orchestrator | Monday 02 June 2025 13:16:56 +0000 (0:00:00.232) 0:03:09.210 *********** 2025-06-02 13:24:18.789426 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.789432 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.789438 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.789444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.789450 | orchestrator | 2025-06-02 13:24:18.789456 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 13:24:18.789462 | orchestrator | Monday 02 June 2025 13:16:57 +0000 (0:00:01.045) 0:03:10.256 *********** 2025-06-02 13:24:18.789468 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.789474 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.789480 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.789487 | orchestrator | 2025-06-02 13:24:18.789493 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 13:24:18.789499 | orchestrator | Monday 02 June 2025 13:16:57 +0000 (0:00:00.348) 0:03:10.605 *********** 2025-06-02 13:24:18.789505 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.789511 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.789517 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.789523 | orchestrator | 2025-06-02 13:24:18.789529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 
13:24:18.789535 | orchestrator | Monday 02 June 2025 13:16:59 +0000 (0:00:01.207) 0:03:11.813 *********** 2025-06-02 13:24:18.789541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.789548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 13:24:18.789554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.789560 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789566 | orchestrator | 2025-06-02 13:24:18.789572 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 13:24:18.789578 | orchestrator | Monday 02 June 2025 13:17:00 +0000 (0:00:01.011) 0:03:12.825 *********** 2025-06-02 13:24:18.789584 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.789590 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.789596 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.789602 | orchestrator | 2025-06-02 13:24:18.789609 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-02 13:24:18.789617 | orchestrator | Monday 02 June 2025 13:17:00 +0000 (0:00:00.324) 0:03:13.150 *********** 2025-06-02 13:24:18.789622 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.789628 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.789633 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.789638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.789644 | orchestrator | 2025-06-02 13:24:18.789649 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 13:24:18.789654 | orchestrator | Monday 02 June 2025 13:17:01 +0000 (0:00:01.073) 0:03:14.223 *********** 2025-06-02 13:24:18.789660 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.789665 | orchestrator | 
ok: [testbed-node-4] 2025-06-02 13:24:18.789670 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.789675 | orchestrator | 2025-06-02 13:24:18.789686 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 13:24:18.789692 | orchestrator | Monday 02 June 2025 13:17:01 +0000 (0:00:00.325) 0:03:14.549 *********** 2025-06-02 13:24:18.789697 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.789703 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.789708 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.789713 | orchestrator | 2025-06-02 13:24:18.789718 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 13:24:18.789724 | orchestrator | Monday 02 June 2025 13:17:03 +0000 (0:00:01.274) 0:03:15.823 *********** 2025-06-02 13:24:18.789729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.789735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 13:24:18.789740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.789745 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789750 | orchestrator | 2025-06-02 13:24:18.789756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 13:24:18.789761 | orchestrator | Monday 02 June 2025 13:17:04 +0000 (0:00:00.882) 0:03:16.706 *********** 2025-06-02 13:24:18.789766 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.789772 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.789777 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.789782 | orchestrator | 2025-06-02 13:24:18.789788 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-02 13:24:18.789793 | orchestrator | Monday 02 June 2025 13:17:04 +0000 (0:00:00.418) 0:03:17.125 *********** 
2025-06-02 13:24:18.789799 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.789804 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.789809 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.789814 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789820 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.789825 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.789830 | orchestrator | 2025-06-02 13:24:18.789836 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 13:24:18.789841 | orchestrator | Monday 02 June 2025 13:17:05 +0000 (0:00:00.933) 0:03:18.058 *********** 2025-06-02 13:24:18.789860 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.789866 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.789872 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.789877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.789882 | orchestrator | 2025-06-02 13:24:18.789888 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 13:24:18.789893 | orchestrator | Monday 02 June 2025 13:17:06 +0000 (0:00:01.061) 0:03:19.120 *********** 2025-06-02 13:24:18.789899 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.789904 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.789909 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.789915 | orchestrator | 2025-06-02 13:24:18.789920 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 13:24:18.789925 | orchestrator | Monday 02 June 2025 13:17:06 +0000 (0:00:00.297) 0:03:19.417 *********** 2025-06-02 13:24:18.789931 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.789936 | orchestrator | changed: [testbed-node-1] 2025-06-02 
13:24:18.789941 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.789947 | orchestrator | 2025-06-02 13:24:18.789952 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 13:24:18.789957 | orchestrator | Monday 02 June 2025 13:17:07 +0000 (0:00:01.175) 0:03:20.593 *********** 2025-06-02 13:24:18.789963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 13:24:18.789968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 13:24:18.789973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 13:24:18.789982 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.789987 | orchestrator | 2025-06-02 13:24:18.789993 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 13:24:18.789998 | orchestrator | Monday 02 June 2025 13:17:08 +0000 (0:00:00.670) 0:03:21.263 *********** 2025-06-02 13:24:18.790003 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790009 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790037 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790044 | orchestrator | 2025-06-02 13:24:18.790049 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-02 13:24:18.790055 | orchestrator | 2025-06-02 13:24:18.790060 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 13:24:18.790066 | orchestrator | Monday 02 June 2025 13:17:09 +0000 (0:00:00.641) 0:03:21.905 *********** 2025-06-02 13:24:18.790081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.790086 | orchestrator | 2025-06-02 13:24:18.790092 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 
13:24:18.790097 | orchestrator | Monday 02 June 2025 13:17:09 +0000 (0:00:00.417) 0:03:22.323 *********** 2025-06-02 13:24:18.790105 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.790111 | orchestrator | 2025-06-02 13:24:18.790116 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 13:24:18.790122 | orchestrator | Monday 02 June 2025 13:17:10 +0000 (0:00:00.594) 0:03:22.917 *********** 2025-06-02 13:24:18.790127 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790133 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790138 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790143 | orchestrator | 2025-06-02 13:24:18.790149 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 13:24:18.790154 | orchestrator | Monday 02 June 2025 13:17:10 +0000 (0:00:00.649) 0:03:23.566 *********** 2025-06-02 13:24:18.790160 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790165 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790170 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790176 | orchestrator | 2025-06-02 13:24:18.790181 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 13:24:18.790186 | orchestrator | Monday 02 June 2025 13:17:11 +0000 (0:00:00.301) 0:03:23.868 *********** 2025-06-02 13:24:18.790192 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790197 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790202 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790208 | orchestrator | 2025-06-02 13:24:18.790213 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 13:24:18.790219 | orchestrator | Monday 02 June 2025 13:17:11 
+0000 (0:00:00.252) 0:03:24.121 *********** 2025-06-02 13:24:18.790224 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790229 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790235 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790240 | orchestrator | 2025-06-02 13:24:18.790245 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 13:24:18.790251 | orchestrator | Monday 02 June 2025 13:17:11 +0000 (0:00:00.410) 0:03:24.531 *********** 2025-06-02 13:24:18.790256 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790262 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790267 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790272 | orchestrator | 2025-06-02 13:24:18.790278 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 13:24:18.790283 | orchestrator | Monday 02 June 2025 13:17:12 +0000 (0:00:00.763) 0:03:25.294 *********** 2025-06-02 13:24:18.790289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790294 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790303 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790308 | orchestrator | 2025-06-02 13:24:18.790314 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 13:24:18.790319 | orchestrator | Monday 02 June 2025 13:17:12 +0000 (0:00:00.284) 0:03:25.578 *********** 2025-06-02 13:24:18.790324 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790330 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790335 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790340 | orchestrator | 2025-06-02 13:24:18.790346 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 13:24:18.790367 | orchestrator | Monday 02 June 2025 13:17:13 +0000 (0:00:00.252) 
0:03:25.831 *********** 2025-06-02 13:24:18.790374 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790379 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790385 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790390 | orchestrator | 2025-06-02 13:24:18.790395 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 13:24:18.790401 | orchestrator | Monday 02 June 2025 13:17:13 +0000 (0:00:00.838) 0:03:26.669 *********** 2025-06-02 13:24:18.790406 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790411 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790417 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790422 | orchestrator | 2025-06-02 13:24:18.790428 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 13:24:18.790433 | orchestrator | Monday 02 June 2025 13:17:14 +0000 (0:00:00.678) 0:03:27.348 *********** 2025-06-02 13:24:18.790438 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790444 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790449 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790454 | orchestrator | 2025-06-02 13:24:18.790460 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 13:24:18.790465 | orchestrator | Monday 02 June 2025 13:17:14 +0000 (0:00:00.308) 0:03:27.656 *********** 2025-06-02 13:24:18.790470 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790476 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790481 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790486 | orchestrator | 2025-06-02 13:24:18.790492 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 13:24:18.790497 | orchestrator | Monday 02 June 2025 13:17:15 +0000 (0:00:00.324) 0:03:27.981 *********** 2025-06-02 13:24:18.790502 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790507 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790513 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790518 | orchestrator | 2025-06-02 13:24:18.790523 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 13:24:18.790529 | orchestrator | Monday 02 June 2025 13:17:15 +0000 (0:00:00.575) 0:03:28.557 *********** 2025-06-02 13:24:18.790534 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790539 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790545 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790550 | orchestrator | 2025-06-02 13:24:18.790555 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 13:24:18.790561 | orchestrator | Monday 02 June 2025 13:17:16 +0000 (0:00:00.310) 0:03:28.867 *********** 2025-06-02 13:24:18.790566 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790571 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790577 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790582 | orchestrator | 2025-06-02 13:24:18.790587 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 13:24:18.790592 | orchestrator | Monday 02 June 2025 13:17:16 +0000 (0:00:00.244) 0:03:29.111 *********** 2025-06-02 13:24:18.790598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790603 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790608 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790614 | orchestrator | 2025-06-02 13:24:18.790625 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 13:24:18.790630 | orchestrator | Monday 02 June 2025 13:17:16 +0000 (0:00:00.221) 0:03:29.332 *********** 2025-06-02 13:24:18.790636 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790641 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.790646 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.790652 | orchestrator | 2025-06-02 13:24:18.790657 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 13:24:18.790663 | orchestrator | Monday 02 June 2025 13:17:16 +0000 (0:00:00.346) 0:03:29.679 *********** 2025-06-02 13:24:18.790668 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790673 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790679 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790684 | orchestrator | 2025-06-02 13:24:18.790689 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 13:24:18.790695 | orchestrator | Monday 02 June 2025 13:17:17 +0000 (0:00:00.310) 0:03:29.989 *********** 2025-06-02 13:24:18.790700 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790705 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790711 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790716 | orchestrator | 2025-06-02 13:24:18.790721 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 13:24:18.790727 | orchestrator | Monday 02 June 2025 13:17:17 +0000 (0:00:00.321) 0:03:30.311 *********** 2025-06-02 13:24:18.790732 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790738 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790743 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790748 | orchestrator | 2025-06-02 13:24:18.790753 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-02 13:24:18.790759 | orchestrator | Monday 02 June 2025 13:17:18 +0000 (0:00:00.657) 0:03:30.969 *********** 2025-06-02 13:24:18.790764 | orchestrator | ok: [testbed-node-0] 2025-06-02 
13:24:18.790770 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790775 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790780 | orchestrator | 2025-06-02 13:24:18.790786 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-02 13:24:18.790791 | orchestrator | Monday 02 June 2025 13:17:18 +0000 (0:00:00.298) 0:03:31.268 *********** 2025-06-02 13:24:18.790796 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.790802 | orchestrator | 2025-06-02 13:24:18.790807 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-02 13:24:18.790813 | orchestrator | Monday 02 June 2025 13:17:19 +0000 (0:00:00.491) 0:03:31.759 *********** 2025-06-02 13:24:18.790818 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.790823 | orchestrator | 2025-06-02 13:24:18.790829 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-02 13:24:18.790834 | orchestrator | Monday 02 June 2025 13:17:19 +0000 (0:00:00.121) 0:03:31.881 *********** 2025-06-02 13:24:18.790839 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 13:24:18.790845 | orchestrator | 2025-06-02 13:24:18.790864 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-02 13:24:18.790871 | orchestrator | Monday 02 June 2025 13:17:20 +0000 (0:00:01.208) 0:03:33.090 *********** 2025-06-02 13:24:18.790876 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790882 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790887 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790892 | orchestrator | 2025-06-02 13:24:18.790898 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-02 13:24:18.790903 | orchestrator | Monday 02 June 2025 
13:17:20 +0000 (0:00:00.284) 0:03:33.374 *********** 2025-06-02 13:24:18.790908 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.790914 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.790919 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.790924 | orchestrator | 2025-06-02 13:24:18.790945 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-02 13:24:18.790951 | orchestrator | Monday 02 June 2025 13:17:20 +0000 (0:00:00.278) 0:03:33.653 *********** 2025-06-02 13:24:18.790956 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.790961 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.790967 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.790972 | orchestrator | 2025-06-02 13:24:18.790977 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-02 13:24:18.790983 | orchestrator | Monday 02 June 2025 13:17:22 +0000 (0:00:01.147) 0:03:34.801 *********** 2025-06-02 13:24:18.790988 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.790993 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.790999 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791004 | orchestrator | 2025-06-02 13:24:18.791009 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-02 13:24:18.791015 | orchestrator | Monday 02 June 2025 13:17:22 +0000 (0:00:00.825) 0:03:35.626 *********** 2025-06-02 13:24:18.791020 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791025 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791031 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791036 | orchestrator | 2025-06-02 13:24:18.791042 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-02 13:24:18.791047 | orchestrator | Monday 02 June 2025 13:17:23 +0000 (0:00:00.610) 
0:03:36.237 *********** 2025-06-02 13:24:18.791052 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791058 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.791063 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.791078 | orchestrator | 2025-06-02 13:24:18.791084 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-02 13:24:18.791089 | orchestrator | Monday 02 June 2025 13:17:24 +0000 (0:00:00.712) 0:03:36.949 *********** 2025-06-02 13:24:18.791095 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791100 | orchestrator | 2025-06-02 13:24:18.791106 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-02 13:24:18.791111 | orchestrator | Monday 02 June 2025 13:17:25 +0000 (0:00:01.210) 0:03:38.160 *********** 2025-06-02 13:24:18.791116 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791122 | orchestrator | 2025-06-02 13:24:18.791127 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-02 13:24:18.791135 | orchestrator | Monday 02 June 2025 13:17:26 +0000 (0:00:00.696) 0:03:38.856 *********** 2025-06-02 13:24:18.791141 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.791146 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 13:24:18.791151 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 13:24:18.791157 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 13:24:18.791162 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-02 13:24:18.791167 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 13:24:18.791173 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 13:24:18.791178 | orchestrator | 
changed: [testbed-node-0 -> {{ item }}] 2025-06-02 13:24:18.791184 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 13:24:18.791189 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-02 13:24:18.791194 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-02 13:24:18.791200 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-02 13:24:18.791205 | orchestrator | 2025-06-02 13:24:18.791210 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-02 13:24:18.791216 | orchestrator | Monday 02 June 2025 13:17:29 +0000 (0:00:03.324) 0:03:42.181 *********** 2025-06-02 13:24:18.791221 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791226 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791263 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791268 | orchestrator | 2025-06-02 13:24:18.791274 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-02 13:24:18.791279 | orchestrator | Monday 02 June 2025 13:17:31 +0000 (0:00:01.653) 0:03:43.834 *********** 2025-06-02 13:24:18.791284 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791290 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.791295 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.791301 | orchestrator | 2025-06-02 13:24:18.791306 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-02 13:24:18.791312 | orchestrator | Monday 02 June 2025 13:17:31 +0000 (0:00:00.269) 0:03:44.103 *********** 2025-06-02 13:24:18.791317 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791322 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.791328 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.791333 | orchestrator | 2025-06-02 13:24:18.791338 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-06-02 13:24:18.791344 | orchestrator | Monday 02 June 2025 13:17:31 +0000 (0:00:00.269) 0:03:44.373 *********** 2025-06-02 13:24:18.791349 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791354 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791360 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791365 | orchestrator | 2025-06-02 13:24:18.791371 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-02 13:24:18.791393 | orchestrator | Monday 02 June 2025 13:17:33 +0000 (0:00:01.609) 0:03:45.983 *********** 2025-06-02 13:24:18.791399 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791405 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791410 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791415 | orchestrator | 2025-06-02 13:24:18.791421 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-02 13:24:18.791426 | orchestrator | Monday 02 June 2025 13:17:34 +0000 (0:00:01.565) 0:03:47.548 *********** 2025-06-02 13:24:18.791431 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791437 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791447 | orchestrator | 2025-06-02 13:24:18.791452 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-02 13:24:18.791458 | orchestrator | Monday 02 June 2025 13:17:35 +0000 (0:00:00.225) 0:03:47.773 *********** 2025-06-02 13:24:18.791463 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.791469 | orchestrator | 2025-06-02 13:24:18.791474 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-02 13:24:18.791479 | 
orchestrator | Monday 02 June 2025 13:17:35 +0000 (0:00:00.400) 0:03:48.174 *********** 2025-06-02 13:24:18.791484 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791490 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791495 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791500 | orchestrator | 2025-06-02 13:24:18.791506 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-02 13:24:18.791511 | orchestrator | Monday 02 June 2025 13:17:35 +0000 (0:00:00.349) 0:03:48.523 *********** 2025-06-02 13:24:18.791516 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791522 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791527 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791532 | orchestrator | 2025-06-02 13:24:18.791537 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-02 13:24:18.791543 | orchestrator | Monday 02 June 2025 13:17:36 +0000 (0:00:00.250) 0:03:48.774 *********** 2025-06-02 13:24:18.791548 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.791554 | orchestrator | 2025-06-02 13:24:18.791559 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-02 13:24:18.791567 | orchestrator | Monday 02 June 2025 13:17:36 +0000 (0:00:00.422) 0:03:49.196 *********** 2025-06-02 13:24:18.791573 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791578 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791584 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791589 | orchestrator | 2025-06-02 13:24:18.791594 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-02 13:24:18.791599 | orchestrator | Monday 02 June 2025 13:17:38 +0000 (0:00:01.742) 
0:03:50.939 *********** 2025-06-02 13:24:18.791605 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791610 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791618 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791623 | orchestrator | 2025-06-02 13:24:18.791629 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-02 13:24:18.791634 | orchestrator | Monday 02 June 2025 13:17:39 +0000 (0:00:01.135) 0:03:52.074 *********** 2025-06-02 13:24:18.791639 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791645 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791650 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791655 | orchestrator | 2025-06-02 13:24:18.791660 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-02 13:24:18.791666 | orchestrator | Monday 02 June 2025 13:17:41 +0000 (0:00:01.669) 0:03:53.744 *********** 2025-06-02 13:24:18.791671 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.791676 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.791682 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.791687 | orchestrator | 2025-06-02 13:24:18.791692 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-02 13:24:18.791698 | orchestrator | Monday 02 June 2025 13:17:42 +0000 (0:00:01.883) 0:03:55.627 *********** 2025-06-02 13:24:18.791703 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.791708 | orchestrator | 2025-06-02 13:24:18.791714 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-02 13:24:18.791719 | orchestrator | Monday 02 June 2025 13:17:43 +0000 (0:00:00.635) 0:03:56.263 *********** 2025-06-02 13:24:18.791724 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-02 13:24:18.791729 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791735 | orchestrator | 2025-06-02 13:24:18.791740 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-02 13:24:18.791745 | orchestrator | Monday 02 June 2025 13:18:05 +0000 (0:00:21.823) 0:04:18.086 *********** 2025-06-02 13:24:18.791751 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791756 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.791761 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.791767 | orchestrator | 2025-06-02 13:24:18.791772 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-02 13:24:18.791778 | orchestrator | Monday 02 June 2025 13:18:15 +0000 (0:00:10.136) 0:04:28.223 *********** 2025-06-02 13:24:18.791783 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791788 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791793 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791799 | orchestrator | 2025-06-02 13:24:18.791804 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-02 13:24:18.791809 | orchestrator | Monday 02 June 2025 13:18:15 +0000 (0:00:00.331) 0:04:28.554 *********** 2025-06-02 13:24:18.791830 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-06-02 13:24:18.791841 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 13:24:18.791847 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 13:24:18.791853 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 13:24:18.791859 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 13:24:18.791867 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__25c72c882e0b03544b644d144c864f324b6d9866'}])  2025-06-02 13:24:18.791874 | orchestrator | 2025-06-02 13:24:18.791879 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 13:24:18.791884 | orchestrator | Monday 02 June 2025 13:18:29 +0000 (0:00:14.146) 0:04:42.701 *********** 2025-06-02 13:24:18.791890 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791895 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791900 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791906 | orchestrator | 2025-06-02 13:24:18.791911 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 13:24:18.791916 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.298) 0:04:42.999 *********** 2025-06-02 13:24:18.791921 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.791927 | orchestrator | 2025-06-02 13:24:18.791932 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 13:24:18.791937 | orchestrator | Monday 02 June 2025 13:18:30 +0000 (0:00:00.579) 0:04:43.579 *********** 2025-06-02 13:24:18.791943 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.791948 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.791953 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.791959 | orchestrator | 2025-06-02 13:24:18.791964 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 13:24:18.791969 | orchestrator | Monday 02 June 2025 13:18:31 +0000 (0:00:00.315) 0:04:43.895 *********** 2025-06-02 13:24:18.791975 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.791980 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.791985 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.791990 | orchestrator | 2025-06-02 13:24:18.791996 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 13:24:18.792001 | orchestrator | Monday 02 June 2025 13:18:31 +0000 (0:00:00.349) 0:04:44.244 *********** 2025-06-02 13:24:18.792009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 13:24:18.792015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 13:24:18.792020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 13:24:18.792025 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792031 | orchestrator | 2025-06-02 13:24:18.792036 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 13:24:18.792041 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:00.698) 0:04:44.942 *********** 2025-06-02 13:24:18.792047 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792052 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792057 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792063 | orchestrator | 2025-06-02 13:24:18.792077 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 13:24:18.792083 | orchestrator | 2025-06-02 13:24:18.792088 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 13:24:18.792109 | orchestrator | Monday 02 June 2025 13:18:32 +0000 (0:00:00.626) 0:04:45.569 *********** 2025-06-02 13:24:18.792115 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.792121 | orchestrator | 2025-06-02 13:24:18.792126 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 13:24:18.792131 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.458) 0:04:46.027 *********** 2025-06-02 13:24:18.792137 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.792142 | orchestrator | 2025-06-02 13:24:18.792148 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 13:24:18.792153 | orchestrator | Monday 02 June 2025 13:18:33 +0000 (0:00:00.522) 0:04:46.550 *********** 2025-06-02 13:24:18.792158 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792164 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792169 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792174 | orchestrator | 2025-06-02 13:24:18.792180 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 13:24:18.792185 | orchestrator | Monday 02 June 2025 13:18:34 +0000 (0:00:00.586) 0:04:47.136 *********** 2025-06-02 13:24:18.792190 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792196 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792201 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792206 | orchestrator | 2025-06-02 13:24:18.792212 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 13:24:18.792217 | orchestrator | Monday 02 June 2025 13:18:34 +0000 (0:00:00.276) 0:04:47.412 *********** 2025-06-02 13:24:18.792222 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792228 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792233 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792238 | orchestrator | 2025-06-02 13:24:18.792244 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 
13:24:18.792249 | orchestrator | Monday 02 June 2025 13:18:35 +0000 (0:00:00.410) 0:04:47.823 *********** 2025-06-02 13:24:18.792254 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792260 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792265 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792271 | orchestrator | 2025-06-02 13:24:18.792276 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 13:24:18.792281 | orchestrator | Monday 02 June 2025 13:18:35 +0000 (0:00:00.257) 0:04:48.080 *********** 2025-06-02 13:24:18.792287 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792292 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792297 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792303 | orchestrator | 2025-06-02 13:24:18.792308 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 13:24:18.792317 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.669) 0:04:48.750 *********** 2025-06-02 13:24:18.792323 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792340 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792346 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792352 | orchestrator | 2025-06-02 13:24:18.792357 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 13:24:18.792363 | orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.276) 0:04:49.026 *********** 2025-06-02 13:24:18.792368 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792373 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792379 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792384 | orchestrator | 2025-06-02 13:24:18.792389 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 13:24:18.792395 | 
orchestrator | Monday 02 June 2025 13:18:36 +0000 (0:00:00.505) 0:04:49.532 *********** 2025-06-02 13:24:18.792400 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792405 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792411 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792416 | orchestrator | 2025-06-02 13:24:18.792421 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 13:24:18.792427 | orchestrator | Monday 02 June 2025 13:18:37 +0000 (0:00:00.695) 0:04:50.227 *********** 2025-06-02 13:24:18.792432 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792437 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792443 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792448 | orchestrator | 2025-06-02 13:24:18.792454 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 13:24:18.792459 | orchestrator | Monday 02 June 2025 13:18:38 +0000 (0:00:00.664) 0:04:50.892 *********** 2025-06-02 13:24:18.792464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792470 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792475 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792480 | orchestrator | 2025-06-02 13:24:18.792486 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 13:24:18.792491 | orchestrator | Monday 02 June 2025 13:18:38 +0000 (0:00:00.285) 0:04:51.178 *********** 2025-06-02 13:24:18.792496 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792502 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792507 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792512 | orchestrator | 2025-06-02 13:24:18.792518 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 13:24:18.792523 | orchestrator | Monday 02 June 2025 13:18:39 +0000 
(0:00:00.544) 0:04:51.722 *********** 2025-06-02 13:24:18.792528 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792534 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792539 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792544 | orchestrator | 2025-06-02 13:24:18.792550 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 13:24:18.792555 | orchestrator | Monday 02 June 2025 13:18:39 +0000 (0:00:00.339) 0:04:52.062 *********** 2025-06-02 13:24:18.792560 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792566 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792571 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792577 | orchestrator | 2025-06-02 13:24:18.792597 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 13:24:18.792604 | orchestrator | Monday 02 June 2025 13:18:39 +0000 (0:00:00.283) 0:04:52.345 *********** 2025-06-02 13:24:18.792609 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792614 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792625 | orchestrator | 2025-06-02 13:24:18.792630 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 13:24:18.792636 | orchestrator | Monday 02 June 2025 13:18:39 +0000 (0:00:00.269) 0:04:52.615 *********** 2025-06-02 13:24:18.792644 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792650 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792655 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792660 | orchestrator | 2025-06-02 13:24:18.792666 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 13:24:18.792671 | orchestrator | Monday 02 June 2025 13:18:40 +0000 
(0:00:00.537) 0:04:53.153 *********** 2025-06-02 13:24:18.792676 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792682 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792687 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792692 | orchestrator | 2025-06-02 13:24:18.792697 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 13:24:18.792703 | orchestrator | Monday 02 June 2025 13:18:40 +0000 (0:00:00.324) 0:04:53.478 *********** 2025-06-02 13:24:18.792708 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792713 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792719 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792724 | orchestrator | 2025-06-02 13:24:18.792729 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 13:24:18.792735 | orchestrator | Monday 02 June 2025 13:18:41 +0000 (0:00:00.377) 0:04:53.855 *********** 2025-06-02 13:24:18.792740 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792745 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792751 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792756 | orchestrator | 2025-06-02 13:24:18.792761 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 13:24:18.792767 | orchestrator | Monday 02 June 2025 13:18:41 +0000 (0:00:00.295) 0:04:54.151 *********** 2025-06-02 13:24:18.792772 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792778 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792783 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792788 | orchestrator | 2025-06-02 13:24:18.792794 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-02 13:24:18.792799 | orchestrator | Monday 02 June 2025 13:18:42 +0000 (0:00:00.633) 0:04:54.784 *********** 2025-06-02 
13:24:18.792804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:24:18.792810 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 13:24:18.792815 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 13:24:18.792821 | orchestrator | 2025-06-02 13:24:18.792826 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-02 13:24:18.792834 | orchestrator | Monday 02 June 2025 13:18:42 +0000 (0:00:00.472) 0:04:55.257 *********** 2025-06-02 13:24:18.792839 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:24:18.792845 | orchestrator | 2025-06-02 13:24:18.792850 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-02 13:24:18.792855 | orchestrator | Monday 02 June 2025 13:18:43 +0000 (0:00:00.474) 0:04:55.732 *********** 2025-06-02 13:24:18.792861 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.792866 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.792871 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.792876 | orchestrator | 2025-06-02 13:24:18.792882 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-02 13:24:18.792887 | orchestrator | Monday 02 June 2025 13:18:43 +0000 (0:00:00.820) 0:04:56.552 *********** 2025-06-02 13:24:18.792892 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.792898 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.792903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.792908 | orchestrator | 2025-06-02 13:24:18.792914 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-02 13:24:18.792919 | orchestrator | Monday 02 June 2025 13:18:44 +0000 
(0:00:00.265) 0:04:56.818 *********** 2025-06-02 13:24:18.792928 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.792933 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.792939 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.792944 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-02 13:24:18.792949 | orchestrator | 2025-06-02 13:24:18.792955 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-02 13:24:18.792960 | orchestrator | Monday 02 June 2025 13:18:56 +0000 (0:00:12.469) 0:05:09.287 *********** 2025-06-02 13:24:18.792965 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.792971 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.792976 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.792981 | orchestrator | 2025-06-02 13:24:18.792987 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-02 13:24:18.792992 | orchestrator | Monday 02 June 2025 13:18:56 +0000 (0:00:00.332) 0:05:09.619 *********** 2025-06-02 13:24:18.792997 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 13:24:18.793003 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 13:24:18.793008 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 13:24:18.793013 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.793019 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 13:24:18.793024 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 13:24:18.793029 | orchestrator | 2025-06-02 13:24:18.793035 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-02 13:24:18.793054 | orchestrator | Monday 02 June 2025 13:18:59 +0000 (0:00:02.156) 
0:05:11.776 *********** 2025-06-02 13:24:18.793061 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 13:24:18.793066 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 13:24:18.793098 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 13:24:18.793104 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:24:18.793109 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-02 13:24:18.793114 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-02 13:24:18.793120 | orchestrator | 2025-06-02 13:24:18.793125 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-02 13:24:18.793131 | orchestrator | Monday 02 June 2025 13:19:00 +0000 (0:00:01.630) 0:05:13.406 *********** 2025-06-02 13:24:18.793136 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.793142 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.793147 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.793152 | orchestrator | 2025-06-02 13:24:18.793158 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-02 13:24:18.793163 | orchestrator | Monday 02 June 2025 13:19:01 +0000 (0:00:00.622) 0:05:14.029 *********** 2025-06-02 13:24:18.793168 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.793174 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.793178 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.793183 | orchestrator | 2025-06-02 13:24:18.793188 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-02 13:24:18.793193 | orchestrator | Monday 02 June 2025 13:19:01 +0000 (0:00:00.285) 0:05:14.314 *********** 2025-06-02 13:24:18.793197 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.793202 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.793207 | orchestrator | 
skipping: [testbed-node-2]
2025-06-02 13:24:18.793212 | orchestrator |
2025-06-02 13:24:18.793216 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-02 13:24:18.793221 | orchestrator | Monday 02 June 2025 13:19:01 +0000 (0:00:00.293) 0:05:14.607 ***********
2025-06-02 13:24:18.793226 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:24:18.793231 | orchestrator |
2025-06-02 13:24:18.793236 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-06-02 13:24:18.793244 | orchestrator | Monday 02 June 2025 13:19:02 +0000 (0:00:00.753) 0:05:15.361 ***********
2025-06-02 13:24:18.793249 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.793254 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.793258 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.793263 | orchestrator |
2025-06-02 13:24:18.793268 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-06-02 13:24:18.793273 | orchestrator | Monday 02 June 2025 13:19:02 +0000 (0:00:00.307) 0:05:15.668 ***********
2025-06-02 13:24:18.793277 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.793282 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.793287 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:24:18.793292 | orchestrator |
2025-06-02 13:24:18.793296 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-06-02 13:24:18.793303 | orchestrator | Monday 02 June 2025 13:19:03 +0000 (0:00:00.336) 0:05:16.005 ***********
2025-06-02 13:24:18.793308 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:24:18.793313 | orchestrator |
2025-06-02 13:24:18.793318 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-06-02 13:24:18.793323 | orchestrator | Monday 02 June 2025 13:19:04 +0000 (0:00:00.789) 0:05:16.794 ***********
2025-06-02 13:24:18.793328 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793332 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793337 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793342 | orchestrator |
2025-06-02 13:24:18.793347 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-06-02 13:24:18.793351 | orchestrator | Monday 02 June 2025 13:19:05 +0000 (0:00:01.256) 0:05:18.050 ***********
2025-06-02 13:24:18.793356 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793361 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793365 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793370 | orchestrator |
2025-06-02 13:24:18.793375 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-06-02 13:24:18.793380 | orchestrator | Monday 02 June 2025 13:19:06 +0000 (0:00:01.145) 0:05:19.196 ***********
2025-06-02 13:24:18.793384 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793389 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793394 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793399 | orchestrator |
2025-06-02 13:24:18.793403 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-06-02 13:24:18.793408 | orchestrator | Monday 02 June 2025 13:19:08 +0000 (0:00:02.112) 0:05:21.308 ***********
2025-06-02 13:24:18.793413 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793417 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793422 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793427 | orchestrator |
2025-06-02 13:24:18.793432 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-06-02 13:24:18.793436 | orchestrator | Monday 02 June 2025 13:19:10 +0000 (0:00:01.862) 0:05:23.170 ***********
2025-06-02 13:24:18.793441 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.793446 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:24:18.793451 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-06-02 13:24:18.793455 | orchestrator |
2025-06-02 13:24:18.793460 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-06-02 13:24:18.793465 | orchestrator | Monday 02 June 2025 13:19:10 +0000 (0:00:00.423) 0:05:23.593 ***********
2025-06-02 13:24:18.793470 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-06-02 13:24:18.793475 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-06-02 13:24:18.793494 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-06-02 13:24:18.793503 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-06-02 13:24:18.793508 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 13:24:18.793512 | orchestrator |
2025-06-02 13:24:18.793517 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-06-02 13:24:18.793522 | orchestrator | Monday 02 June 2025 13:19:34 +0000 (0:00:24.035) 0:05:47.629 ***********
2025-06-02 13:24:18.793527 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-06-02 13:24:18.793531 | orchestrator |
2025-06-02 13:24:18.793536 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-06-02 13:24:18.793541 | orchestrator | Monday 02 June 2025 13:19:36 +0000 (0:00:01.456) 0:05:49.085 ***********
2025-06-02 13:24:18.793546 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.793550 | orchestrator |
2025-06-02 13:24:18.793555 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-06-02 13:24:18.793560 | orchestrator | Monday 02 June 2025 13:19:37 +0000 (0:00:00.896) 0:05:49.981 ***********
2025-06-02 13:24:18.793564 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.793569 | orchestrator |
2025-06-02 13:24:18.793574 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-06-02 13:24:18.793579 | orchestrator | Monday 02 June 2025 13:19:37 +0000 (0:00:00.143) 0:05:50.125 ***********
2025-06-02 13:24:18.793583 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-06-02 13:24:18.793588 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-06-02 13:24:18.793593 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-06-02 13:24:18.793597 | orchestrator |
2025-06-02 13:24:18.793602 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
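The mgr-module tasks above fetch the enabled module list from the cluster, convert it to a dict, and derive which modules to disable before the desired ones are added. A minimal Python sketch of that reconciliation, using the module names from the loop items in the log (the JSON shape and the `desired_modules` list are assumptions for illustration, not read from this job):

```python
import json

# Output shaped like `ceph mgr module ls --format json` (hypothetical
# sample; a real cluster reports its actual module state).
mgr_module_ls = json.dumps({
    "enabled_modules": ["iostat", "nfs", "restful"],
    "disabled_modules": [{"name": "dashboard"}, {"name": "prometheus"}],
})
desired_modules = ["dashboard", "prometheus"]  # assumed desired set

enabled = json.loads(mgr_module_ls)["enabled_modules"]

# Enabled modules missing from the desired set get disabled; desired
# modules not yet enabled get added (matching the changed items in the log).
to_disable = [m for m in enabled if m not in desired_modules]
to_enable = [m for m in desired_modules if m not in enabled]

print(to_disable)  # ['iostat', 'nfs', 'restful']
print(to_enable)   # ['dashboard', 'prometheus']
```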
2025-06-02 13:24:18.793607 | orchestrator | Monday 02 June 2025 13:19:43 +0000 (0:00:06.501) 0:05:56.626 ***********
2025-06-02 13:24:18.793611 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-06-02 13:24:18.793616 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-06-02 13:24:18.793621 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-06-02 13:24:18.793625 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-06-02 13:24:18.793630 | orchestrator |
2025-06-02 13:24:18.793635 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 13:24:18.793640 | orchestrator | Monday 02 June 2025 13:19:48 +0000 (0:00:04.543) 0:06:01.169 ***********
2025-06-02 13:24:18.793644 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793649 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793654 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793658 | orchestrator |
2025-06-02 13:24:18.793665 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-06-02 13:24:18.793670 | orchestrator | Monday 02 June 2025 13:19:49 +0000 (0:00:00.940) 0:06:02.109 ***********
2025-06-02 13:24:18.793675 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:24:18.793680 | orchestrator |
2025-06-02 13:24:18.793685 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-06-02 13:24:18.793689 | orchestrator | Monday 02 June 2025 13:19:49 +0000 (0:00:00.498) 0:06:02.607 ***********
2025-06-02 13:24:18.793694 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.793699 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.793703 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.793708 | orchestrator |
2025-06-02 13:24:18.793713 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-06-02 13:24:18.793718 | orchestrator | Monday 02 June 2025 13:19:50 +0000 (0:00:00.303) 0:06:02.911 ***********
2025-06-02 13:24:18.793725 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.793730 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.793735 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.793739 | orchestrator |
2025-06-02 13:24:18.793744 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-06-02 13:24:18.793749 | orchestrator | Monday 02 June 2025 13:19:51 +0000 (0:00:01.733) 0:06:04.645 ***********
2025-06-02 13:24:18.793753 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 13:24:18.793758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 13:24:18.793763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 13:24:18.793767 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:24:18.793772 | orchestrator |
2025-06-02 13:24:18.793777 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-06-02 13:24:18.793782 | orchestrator | Monday 02 June 2025 13:19:52 +0000 (0:00:00.577) 0:06:05.223 ***********
2025-06-02 13:24:18.793786 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.793791 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.793796 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.793800 | orchestrator |
2025-06-02 13:24:18.793805 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-06-02 13:24:18.793810 | orchestrator |
2025-06-02 13:24:18.793815 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 13:24:18.793819 | orchestrator | Monday 02 June 2025 13:19:52 +0000 (0:00:00.476) 0:06:05.699 ***********
2025-06-02 13:24:18.793824 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.793829 | orchestrator |
2025-06-02 13:24:18.793834 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 13:24:18.793838 | orchestrator | Monday 02 June 2025 13:19:53 +0000 (0:00:00.563) 0:06:06.263 ***********
2025-06-02 13:24:18.793857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.793862 | orchestrator |
2025-06-02 13:24:18.793867 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 13:24:18.793872 | orchestrator | Monday 02 June 2025 13:19:54 +0000 (0:00:00.443) 0:06:06.707 ***********
2025-06-02 13:24:18.793877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.793881 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.793886 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.793891 | orchestrator |
2025-06-02 13:24:18.793896 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 13:24:18.793901 | orchestrator | Monday 02 June 2025 13:19:54 +0000 (0:00:00.244) 0:06:06.951 ***********
2025-06-02 13:24:18.793905 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.793910 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.793915 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.793919 | orchestrator |
2025-06-02 13:24:18.793924 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 13:24:18.793929 | orchestrator | Monday 02 June 2025 13:19:55 +0000 (0:00:00.785) 0:06:07.736 ***********
2025-06-02 13:24:18.793934 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.793939 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.793943 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.793948 | orchestrator |
2025-06-02 13:24:18.793953 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 13:24:18.793958 | orchestrator | Monday 02 June 2025 13:19:55 +0000 (0:00:00.592) 0:06:08.329 ***********
2025-06-02 13:24:18.793962 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.793967 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.793972 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.793976 | orchestrator |
2025-06-02 13:24:18.793981 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 13:24:18.793989 | orchestrator | Monday 02 June 2025 13:19:56 +0000 (0:00:00.627) 0:06:08.957 ***********
2025-06-02 13:24:18.793994 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.793999 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794003 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794008 | orchestrator |
2025-06-02 13:24:18.794025 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 13:24:18.794031 | orchestrator | Monday 02 June 2025 13:19:56 +0000 (0:00:00.686) 0:06:09.240 ***********
2025-06-02 13:24:18.794036 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794041 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794046 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794050 | orchestrator |
2025-06-02 13:24:18.794055 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 13:24:18.794060 | orchestrator | Monday 02 June 2025 13:19:57 +0000 (0:00:00.346) 0:06:09.927 ***********
2025-06-02 13:24:18.794065 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794078 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794083 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794087 | orchestrator |
2025-06-02 13:24:18.794092 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 13:24:18.794100 | orchestrator | Monday 02 June 2025 13:19:57 +0000 (0:00:00.346) 0:06:10.273 ***********
2025-06-02 13:24:18.794105 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794110 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794114 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794119 | orchestrator |
2025-06-02 13:24:18.794124 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 13:24:18.794129 | orchestrator | Monday 02 June 2025 13:19:58 +0000 (0:00:00.672) 0:06:10.946 ***********
2025-06-02 13:24:18.794134 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794139 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794143 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794148 | orchestrator |
2025-06-02 13:24:18.794153 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 13:24:18.794158 | orchestrator | Monday 02 June 2025 13:19:58 +0000 (0:00:00.672) 0:06:11.619 ***********
2025-06-02 13:24:18.794163 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794167 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794172 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794177 | orchestrator |
2025-06-02 13:24:18.794182 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 13:24:18.794187 | orchestrator | Monday 02 June 2025 13:19:59 +0000 (0:00:00.551) 0:06:12.170 ***********
2025-06-02 13:24:18.794191 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794196 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794201 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794206 | orchestrator |
2025-06-02 13:24:18.794211 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 13:24:18.794215 | orchestrator | Monday 02 June 2025 13:19:59 +0000 (0:00:00.326) 0:06:12.497 ***********
2025-06-02 13:24:18.794220 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794225 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794230 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794234 | orchestrator |
2025-06-02 13:24:18.794239 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 13:24:18.794244 | orchestrator | Monday 02 June 2025 13:20:00 +0000 (0:00:00.305) 0:06:12.802 ***********
2025-06-02 13:24:18.794249 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794253 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794258 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794263 | orchestrator |
2025-06-02 13:24:18.794268 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 13:24:18.794273 | orchestrator | Monday 02 June 2025 13:20:00 +0000 (0:00:00.331) 0:06:13.133 ***********
2025-06-02 13:24:18.794281 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794286 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794291 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794295 | orchestrator |
2025-06-02 13:24:18.794300 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 13:24:18.794305 | orchestrator | Monday 02 June 2025 13:20:01 +0000 (0:00:00.698) 0:06:13.832 ***********
2025-06-02 13:24:18.794310 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794315 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794319 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794324 | orchestrator |
2025-06-02 13:24:18.794331 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 13:24:18.794336 | orchestrator | Monday 02 June 2025 13:20:01 +0000 (0:00:00.370) 0:06:14.203 ***********
2025-06-02 13:24:18.794341 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794346 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794351 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794356 | orchestrator |
2025-06-02 13:24:18.794360 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 13:24:18.794365 | orchestrator | Monday 02 June 2025 13:20:01 +0000 (0:00:00.311) 0:06:14.514 ***********
2025-06-02 13:24:18.794370 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794375 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794380 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794384 | orchestrator |
2025-06-02 13:24:18.794389 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 13:24:18.794394 | orchestrator | Monday 02 June 2025 13:20:02 +0000 (0:00:00.315) 0:06:14.830 ***********
2025-06-02 13:24:18.794399 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794403 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794408 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794413 | orchestrator |
2025-06-02 13:24:18.794418 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 13:24:18.794422 | orchestrator | Monday 02 June 2025 13:20:02 +0000 (0:00:00.671) 0:06:15.501 ***********
2025-06-02 13:24:18.794427 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794432 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794437 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794442 | orchestrator |
2025-06-02 13:24:18.794446 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-06-02 13:24:18.794451 | orchestrator | Monday 02 June 2025 13:20:03 +0000 (0:00:00.685) 0:06:16.187 ***********
2025-06-02 13:24:18.794456 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794461 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794465 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794470 | orchestrator |
2025-06-02 13:24:18.794475 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-06-02 13:24:18.794480 | orchestrator | Monday 02 June 2025 13:20:03 +0000 (0:00:00.333) 0:06:16.521 ***********
2025-06-02 13:24:18.794484 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 13:24:18.794489 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 13:24:18.794494 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 13:24:18.794499 | orchestrator |
2025-06-02 13:24:18.794503 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-06-02 13:24:18.794508 | orchestrator | Monday 02 June 2025 13:20:04 +0000 (0:00:01.054) 0:06:17.576 ***********
2025-06-02 13:24:18.794513 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.794518 | orchestrator |
2025-06-02 13:24:18.794525 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-06-02 13:24:18.794529 | orchestrator | Monday 02 June 2025 13:20:05 +0000 (0:00:00.810) 0:06:18.386 ***********
2025-06-02 13:24:18.794537 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794542 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794547 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794551 | orchestrator |
2025-06-02 13:24:18.794556 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-06-02 13:24:18.794561 | orchestrator | Monday 02 June 2025 13:20:05 +0000 (0:00:00.316) 0:06:18.705 ***********
2025-06-02 13:24:18.794566 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794570 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794575 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794580 | orchestrator |
2025-06-02 13:24:18.794585 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-06-02 13:24:18.794589 | orchestrator | Monday 02 June 2025 13:20:06 +0000 (0:00:00.316) 0:06:19.022 ***********
2025-06-02 13:24:18.794594 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794599 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794604 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794608 | orchestrator |
2025-06-02 13:24:18.794613 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-06-02 13:24:18.794618 | orchestrator | Monday 02 June 2025 13:20:07 +0000 (0:00:01.001) 0:06:20.024 ***********
2025-06-02 13:24:18.794623 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.794628 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.794632 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.794637 | orchestrator |
2025-06-02 13:24:18.794642 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-06-02 13:24:18.794646 | orchestrator | Monday 02 June 2025 13:20:07 +0000 (0:00:00.368) 0:06:20.392 ***********
2025-06-02 13:24:18.794651 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-02 13:24:18.794656 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-02 13:24:18.794661 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-06-02 13:24:18.794666 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-02 13:24:18.794670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-02 13:24:18.794675 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-06-02 13:24:18.794680 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-02 13:24:18.794685 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-02 13:24:18.794692 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-06-02 13:24:18.794697 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-02 13:24:18.794702 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-02 13:24:18.794707 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-02 13:24:18.794712 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-02 13:24:18.794716 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-06-02 13:24:18.794721 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-06-02 13:24:18.794726 | orchestrator |
2025-06-02 13:24:18.794731 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
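The "Apply operating system tuning" task loops over name/value pairs and applies each via sysctl. For reference, a small sketch that renders those exact pairs (taken from the loop items in the log) as a sysctl.conf-style fragment; actually writing to /etc/sysctl.d or invoking sysctl is deliberately left out:

```python
# Name/value pairs exactly as looped over by the tuning task above;
# here they are only rendered, no system state is touched.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

lines = ["{} = {}".format(p["name"], p["value"]) for p in os_tuning_params]
print("\n".join(lines))
```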
2025-06-02 13:24:18.794735 | orchestrator | Monday 02 June 2025 13:20:10 +0000 (0:00:03.210) 0:06:23.603 ***********
2025-06-02 13:24:18.794740 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.794745 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.794750 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.794754 | orchestrator |
2025-06-02 13:24:18.794762 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-06-02 13:24:18.794767 | orchestrator | Monday 02 June 2025 13:20:11 +0000 (0:00:00.325) 0:06:23.928 ***********
2025-06-02 13:24:18.794771 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.794776 | orchestrator |
2025-06-02 13:24:18.794781 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-06-02 13:24:18.794786 | orchestrator | Monday 02 June 2025 13:20:12 +0000 (0:00:00.858) 0:06:24.787 ***********
2025-06-02 13:24:18.794791 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-02 13:24:18.794795 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-02 13:24:18.794800 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-02 13:24:18.794805 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-06-02 13:24:18.794810 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-06-02 13:24:18.794814 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-06-02 13:24:18.794819 | orchestrator |
2025-06-02 13:24:18.794824 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-06-02 13:24:18.794829 | orchestrator | Monday 02 June 2025 13:20:13 +0000 (0:00:00.990) 0:06:25.778 ***********
2025-06-02 13:24:18.794834 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.794838 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.794855 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 13:24:18.794860 | orchestrator |
2025-06-02 13:24:18.794865 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-06-02 13:24:18.794870 | orchestrator | Monday 02 June 2025 13:20:14 +0000 (0:00:01.907) 0:06:27.685 ***********
2025-06-02 13:24:18.794875 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.794879 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.794884 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.794889 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.794893 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.794898 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.794903 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.794908 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.794912 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.794917 | orchestrator |
2025-06-02 13:24:18.794922 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-06-02 13:24:18.794927 | orchestrator | Monday 02 June 2025 13:20:16 +0000 (0:00:01.240) 0:06:28.926 ***********
2025-06-02 13:24:18.794931 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 13:24:18.794936 | orchestrator |
2025-06-02 13:24:18.794941 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-06-02 13:24:18.794946 | orchestrator | Monday 02 June 2025 13:20:18 +0000 (0:00:01.955) 0:06:30.881 ***********
2025-06-02 13:24:18.794950 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.794955 | orchestrator |
2025-06-02 13:24:18.794960 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-06-02 13:24:18.794965 | orchestrator | Monday 02 June 2025 13:20:18 +0000 (0:00:00.543) 0:06:31.424 ***********
2025-06-02 13:24:18.794970 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d6dea29-b52d-558c-8900-475fd450038e', 'data_vg': 'ceph-4d6dea29-b52d-558c-8900-475fd450038e'})
2025-06-02 13:24:18.794975 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e284bd18-e265-58a5-a2ab-ec21b03cc36c', 'data_vg': 'ceph-e284bd18-e265-58a5-a2ab-ec21b03cc36c'})
2025-06-02 13:24:18.794980 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e', 'data_vg': 'ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e'})
2025-06-02 13:24:18.794988 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-903578c2-c0cc-5204-b647-273ed346895e', 'data_vg': 'ceph-903578c2-c0cc-5204-b647-273ed346895e'})
2025-06-02 13:24:18.794996 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb', 'data_vg': 'ceph-8c0a4a87-9c6a-5b65-b86e-eb950bafb2cb'})
2025-06-02 13:24:18.795001 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4e8c4e16-432b-566e-bc19-b5260bfeea4e', 'data_vg': 'ceph-4e8c4e16-432b-566e-bc19-b5260bfeea4e'})
2025-06-02 13:24:18.795005 | orchestrator |
2025-06-02 13:24:18.795010 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-06-02 13:24:18.795015 | orchestrator | Monday 02 June 2025 13:20:58 +0000 (0:00:39.982) 0:07:11.407 ***********
2025-06-02 13:24:18.795020 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.795025 | orchestrator | skipping: [testbed-node-4]
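Each loop item of "Use ceph-volume to create osds" names a logical volume (`data`) and its volume group (`data_vg`); ceph-volume receives them joined as a vg/lv pair. A sketch of how one such item maps onto a ceph-volume invocation (one item from testbed-node-3 in the log; the exact flags, e.g. `--bluestore`, are an assumption for illustration, not read from this job):

```python
import shlex

# One lvm_volumes item as it appears in the log for testbed-node-3.
lvm_volumes = [
    {"data": "osd-block-16065c32-ca37-5a4d-8ac9-40bfcb225d4e",
     "data_vg": "ceph-16065c32-ca37-5a4d-8ac9-40bfcb225d4e"},
]

def ceph_volume_cmd(item):
    # With a data_vg present, ceph-volume takes the LV as "vg/lv".
    data = "{}/{}".format(item["data_vg"], item["data"])
    return ["ceph-volume", "--cluster", "ceph", "lvm", "create",
            "--bluestore", "--data", data]

cmds = [ceph_volume_cmd(item) for item in lvm_volumes]
for cmd in cmds:
    print(shlex.join(cmd))
```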
13:24:18.795029 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795034 | orchestrator | 2025-06-02 13:24:18.795039 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-02 13:24:18.795044 | orchestrator | Monday 02 June 2025 13:20:59 +0000 (0:00:00.664) 0:07:12.072 *********** 2025-06-02 13:24:18.795049 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.795053 | orchestrator | 2025-06-02 13:24:18.795058 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-02 13:24:18.795063 | orchestrator | Monday 02 June 2025 13:20:59 +0000 (0:00:00.558) 0:07:12.631 *********** 2025-06-02 13:24:18.795075 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.795080 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.795085 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.795090 | orchestrator | 2025-06-02 13:24:18.795094 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-02 13:24:18.795099 | orchestrator | Monday 02 June 2025 13:21:00 +0000 (0:00:00.634) 0:07:13.265 *********** 2025-06-02 13:24:18.795104 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.795109 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.795114 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.795118 | orchestrator | 2025-06-02 13:24:18.795123 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-02 13:24:18.795128 | orchestrator | Monday 02 June 2025 13:21:03 +0000 (0:00:02.942) 0:07:16.208 *********** 2025-06-02 13:24:18.795133 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.795138 | orchestrator | 2025-06-02 13:24:18.795142 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-02 13:24:18.795147 | orchestrator | Monday 02 June 2025 13:21:04 +0000 (0:00:00.546) 0:07:16.754 *********** 2025-06-02 13:24:18.795152 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.795157 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.795161 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.795166 | orchestrator | 2025-06-02 13:24:18.795171 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-02 13:24:18.795176 | orchestrator | Monday 02 June 2025 13:21:05 +0000 (0:00:01.156) 0:07:17.910 *********** 2025-06-02 13:24:18.795180 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.795188 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.795192 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.795197 | orchestrator | 2025-06-02 13:24:18.795202 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-02 13:24:18.795207 | orchestrator | Monday 02 June 2025 13:21:06 +0000 (0:00:01.372) 0:07:19.283 *********** 2025-06-02 13:24:18.795211 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.795216 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.795224 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.795229 | orchestrator | 2025-06-02 13:24:18.795234 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-02 13:24:18.795238 | orchestrator | Monday 02 June 2025 13:21:08 +0000 (0:00:01.880) 0:07:21.163 *********** 2025-06-02 13:24:18.795243 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795248 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795253 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795257 | orchestrator | 2025-06-02 13:24:18.795262 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-02 13:24:18.795267 | orchestrator | Monday 02 June 2025 13:21:08 +0000 (0:00:00.309) 0:07:21.473 *********** 2025-06-02 13:24:18.795272 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795276 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795281 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795286 | orchestrator | 2025-06-02 13:24:18.795291 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-02 13:24:18.795296 | orchestrator | Monday 02 June 2025 13:21:09 +0000 (0:00:00.306) 0:07:21.780 *********** 2025-06-02 13:24:18.795300 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-02 13:24:18.795305 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-02 13:24:18.795310 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-06-02 13:24:18.795314 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 13:24:18.795319 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-02 13:24:18.795324 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-02 13:24:18.795328 | orchestrator | 2025-06-02 13:24:18.795333 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-02 13:24:18.795338 | orchestrator | Monday 02 June 2025 13:21:10 +0000 (0:00:01.227) 0:07:23.008 *********** 2025-06-02 13:24:18.795343 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 13:24:18.795347 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 13:24:18.795352 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-02 13:24:18.795357 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 13:24:18.795361 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 13:24:18.795366 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 13:24:18.795371 | orchestrator | 2025-06-02 13:24:18.795375 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-02 13:24:18.795380 | orchestrator | Monday 02 June 2025 13:21:12 +0000 (0:00:02.155) 0:07:25.163 *********** 2025-06-02 13:24:18.795385 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-02 13:24:18.795392 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-02 13:24:18.795397 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-02 13:24:18.795402 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-02 13:24:18.795406 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-02 13:24:18.795411 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-02 13:24:18.795416 | orchestrator | 2025-06-02 13:24:18.795421 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-02 13:24:18.795425 | orchestrator | Monday 02 June 2025 13:21:16 +0000 (0:00:03.672) 0:07:28.836 *********** 2025-06-02 13:24:18.795430 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795435 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795440 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:24:18.795444 | orchestrator | 2025-06-02 13:24:18.795449 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-02 13:24:18.795454 | orchestrator | Monday 02 June 2025 13:21:18 +0000 (0:00:02.743) 0:07:31.579 *********** 2025-06-02 13:24:18.795459 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795463 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795468 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
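The "Wait for all osd to be up" task above retries (60 attempts here) until the monitor reports every OSD as up and in. A minimal sketch of the check that loop performs, assuming the JSON shape returned by `ceph osd stat -f json` (keys `num_osds`, `num_up_osds`, `num_in_osds` — key names are an assumption, verify against your Ceph release):

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    """Return True once every registered OSD is both up and in.

    Assumes `ceph osd stat -f json` output with the keys
    num_osds / num_up_osds / num_in_osds (hypothetical shape,
    matching recent Ceph releases).
    """
    stat = json.loads(osd_stat_json)
    return (
        stat["num_osds"] > 0
        and stat["num_osds"] == stat["num_up_osds"] == stat["num_in_osds"]
    )

# A 6-OSD cluster where one OSD has not come up yet, then a healthy one.
booting = '{"num_osds": 6, "num_up_osds": 5, "num_in_osds": 6}'
healthy = '{"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}'
print(all_osds_up(booting), all_osds_up(healthy))
```

The playbook delegates this check to a monitor node (`testbed-node-5 -> testbed-node-0` above) because only a host with admin credentials can query the cluster; each retry sleeps and re-polls until the condition holds or retries are exhausted.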
2025-06-02 13:24:18.795476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:24:18.795480 | orchestrator | 2025-06-02 13:24:18.795485 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-02 13:24:18.795490 | orchestrator | Monday 02 June 2025 13:21:31 +0000 (0:00:13.031) 0:07:44.611 *********** 2025-06-02 13:24:18.795495 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795499 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795504 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795509 | orchestrator | 2025-06-02 13:24:18.795513 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 13:24:18.795518 | orchestrator | Monday 02 June 2025 13:21:32 +0000 (0:00:00.841) 0:07:45.452 *********** 2025-06-02 13:24:18.795523 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795528 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795532 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795537 | orchestrator | 2025-06-02 13:24:18.795542 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-02 13:24:18.795546 | orchestrator | Monday 02 June 2025 13:21:33 +0000 (0:00:00.562) 0:07:46.014 *********** 2025-06-02 13:24:18.795551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.795556 | orchestrator | 2025-06-02 13:24:18.795561 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 13:24:18.795565 | orchestrator | Monday 02 June 2025 13:21:33 +0000 (0:00:00.528) 0:07:46.543 *********** 2025-06-02 13:24:18.795570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.795575 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-02 13:24:18.795582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.795587 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795591 | orchestrator | 2025-06-02 13:24:18.795596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 13:24:18.795601 | orchestrator | Monday 02 June 2025 13:21:34 +0000 (0:00:00.395) 0:07:46.938 *********** 2025-06-02 13:24:18.795606 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795610 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795615 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795620 | orchestrator | 2025-06-02 13:24:18.795625 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 13:24:18.795629 | orchestrator | Monday 02 June 2025 13:21:34 +0000 (0:00:00.317) 0:07:47.256 *********** 2025-06-02 13:24:18.795634 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795639 | orchestrator | 2025-06-02 13:24:18.795643 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 13:24:18.795648 | orchestrator | Monday 02 June 2025 13:21:34 +0000 (0:00:00.200) 0:07:47.457 *********** 2025-06-02 13:24:18.795653 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795657 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795662 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795667 | orchestrator | 2025-06-02 13:24:18.795672 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 13:24:18.795676 | orchestrator | Monday 02 June 2025 13:21:35 +0000 (0:00:00.623) 0:07:48.080 *********** 2025-06-02 13:24:18.795681 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795685 | orchestrator | 2025-06-02 13:24:18.795690 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 13:24:18.795695 | orchestrator | Monday 02 June 2025 13:21:35 +0000 (0:00:00.215) 0:07:48.295 *********** 2025-06-02 13:24:18.795700 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795704 | orchestrator | 2025-06-02 13:24:18.795709 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 13:24:18.795714 | orchestrator | Monday 02 June 2025 13:21:35 +0000 (0:00:00.212) 0:07:48.508 *********** 2025-06-02 13:24:18.795721 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795726 | orchestrator | 2025-06-02 13:24:18.795731 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 13:24:18.795735 | orchestrator | Monday 02 June 2025 13:21:35 +0000 (0:00:00.119) 0:07:48.628 *********** 2025-06-02 13:24:18.795740 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795745 | orchestrator | 2025-06-02 13:24:18.795750 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 13:24:18.795754 | orchestrator | Monday 02 June 2025 13:21:36 +0000 (0:00:00.225) 0:07:48.854 *********** 2025-06-02 13:24:18.795759 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795764 | orchestrator | 2025-06-02 13:24:18.795768 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 13:24:18.795773 | orchestrator | Monday 02 June 2025 13:21:36 +0000 (0:00:00.229) 0:07:49.083 *********** 2025-06-02 13:24:18.795780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.795785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.795790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 13:24:18.795795 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
13:24:18.795799 | orchestrator | 2025-06-02 13:24:18.795804 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 13:24:18.795809 | orchestrator | Monday 02 June 2025 13:21:36 +0000 (0:00:00.384) 0:07:49.468 *********** 2025-06-02 13:24:18.795814 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795819 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795823 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795828 | orchestrator | 2025-06-02 13:24:18.795833 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 13:24:18.795837 | orchestrator | Monday 02 June 2025 13:21:37 +0000 (0:00:00.429) 0:07:49.898 *********** 2025-06-02 13:24:18.795842 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795847 | orchestrator | 2025-06-02 13:24:18.795852 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 13:24:18.795856 | orchestrator | Monday 02 June 2025 13:21:38 +0000 (0:00:01.029) 0:07:50.928 *********** 2025-06-02 13:24:18.795861 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795866 | orchestrator | 2025-06-02 13:24:18.795871 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-02 13:24:18.795875 | orchestrator | 2025-06-02 13:24:18.795880 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 13:24:18.795885 | orchestrator | Monday 02 June 2025 13:21:38 +0000 (0:00:00.648) 0:07:51.576 *********** 2025-06-02 13:24:18.795890 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.795895 | orchestrator | 2025-06-02 13:24:18.795899 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 13:24:18.795904 | orchestrator | Monday 02 June 2025 13:21:40 +0000 (0:00:01.156) 0:07:52.733 *********** 2025-06-02 13:24:18.795909 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.795914 | orchestrator | 2025-06-02 13:24:18.795918 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 13:24:18.795923 | orchestrator | Monday 02 June 2025 13:21:41 +0000 (0:00:01.301) 0:07:54.034 *********** 2025-06-02 13:24:18.795928 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.795933 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.795937 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.795942 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.795947 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.795952 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.795959 | orchestrator | 2025-06-02 13:24:18.795964 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 13:24:18.795969 | orchestrator | Monday 02 June 2025 13:21:42 +0000 (0:00:00.861) 0:07:54.896 *********** 2025-06-02 13:24:18.795973 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.795978 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.795983 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.795988 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.795992 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.795997 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796002 | orchestrator | 2025-06-02 13:24:18.796007 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 13:24:18.796011 | orchestrator | Monday 02 
June 2025 13:21:43 +0000 (0:00:00.970) 0:07:55.866 *********** 2025-06-02 13:24:18.796016 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796021 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796026 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796030 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796035 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796040 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796045 | orchestrator | 2025-06-02 13:24:18.796049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 13:24:18.796054 | orchestrator | Monday 02 June 2025 13:21:44 +0000 (0:00:01.328) 0:07:57.195 *********** 2025-06-02 13:24:18.796059 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796087 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796093 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796098 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796103 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796107 | orchestrator | 2025-06-02 13:24:18.796112 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 13:24:18.796117 | orchestrator | Monday 02 June 2025 13:21:45 +0000 (0:00:01.100) 0:07:58.296 *********** 2025-06-02 13:24:18.796121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796126 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796131 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796136 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796140 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796145 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796150 | orchestrator | 2025-06-02 13:24:18.796154 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-02 13:24:18.796159 | orchestrator | Monday 02 June 2025 13:21:46 +0000 (0:00:00.966) 0:07:59.262 *********** 2025-06-02 13:24:18.796164 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796169 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796173 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796178 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796183 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796187 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796192 | orchestrator | 2025-06-02 13:24:18.796197 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 13:24:18.796202 | orchestrator | Monday 02 June 2025 13:21:47 +0000 (0:00:00.676) 0:07:59.939 *********** 2025-06-02 13:24:18.796209 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796214 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796219 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796223 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796228 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796233 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796237 | orchestrator | 2025-06-02 13:24:18.796242 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 13:24:18.796247 | orchestrator | Monday 02 June 2025 13:21:48 +0000 (0:00:00.833) 0:08:00.772 *********** 2025-06-02 13:24:18.796255 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796259 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796264 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796269 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796274 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796278 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796283 | 
orchestrator | 2025-06-02 13:24:18.796288 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 13:24:18.796293 | orchestrator | Monday 02 June 2025 13:21:49 +0000 (0:00:01.053) 0:08:01.825 *********** 2025-06-02 13:24:18.796297 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796302 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796307 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796311 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796316 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796321 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796325 | orchestrator | 2025-06-02 13:24:18.796330 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 13:24:18.796335 | orchestrator | Monday 02 June 2025 13:21:50 +0000 (0:00:01.225) 0:08:03.051 *********** 2025-06-02 13:24:18.796340 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796345 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796349 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796354 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796359 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796363 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796368 | orchestrator | 2025-06-02 13:24:18.796373 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 13:24:18.796378 | orchestrator | Monday 02 June 2025 13:21:50 +0000 (0:00:00.602) 0:08:03.653 *********** 2025-06-02 13:24:18.796382 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796387 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796392 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796396 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796401 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
13:24:18.796406 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796411 | orchestrator | 2025-06-02 13:24:18.796415 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 13:24:18.796420 | orchestrator | Monday 02 June 2025 13:21:51 +0000 (0:00:00.888) 0:08:04.542 *********** 2025-06-02 13:24:18.796425 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796430 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796434 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796439 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796444 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796450 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796455 | orchestrator | 2025-06-02 13:24:18.796460 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 13:24:18.796465 | orchestrator | Monday 02 June 2025 13:21:52 +0000 (0:00:00.671) 0:08:05.213 *********** 2025-06-02 13:24:18.796470 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796475 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796484 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796489 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796493 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796498 | orchestrator | 2025-06-02 13:24:18.796503 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 13:24:18.796508 | orchestrator | Monday 02 June 2025 13:21:53 +0000 (0:00:00.837) 0:08:06.051 *********** 2025-06-02 13:24:18.796512 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796517 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796522 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796526 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 13:24:18.796531 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796539 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796543 | orchestrator | 2025-06-02 13:24:18.796548 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 13:24:18.796553 | orchestrator | Monday 02 June 2025 13:21:53 +0000 (0:00:00.603) 0:08:06.655 *********** 2025-06-02 13:24:18.796558 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796563 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796567 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796572 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796577 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796581 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796586 | orchestrator | 2025-06-02 13:24:18.796591 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 13:24:18.796595 | orchestrator | Monday 02 June 2025 13:21:54 +0000 (0:00:00.753) 0:08:07.408 *********** 2025-06-02 13:24:18.796600 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:24:18.796605 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:24:18.796610 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:24:18.796614 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796619 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796624 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796629 | orchestrator | 2025-06-02 13:24:18.796633 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 13:24:18.796638 | orchestrator | Monday 02 June 2025 13:21:55 +0000 (0:00:00.593) 0:08:08.002 *********** 2025-06-02 13:24:18.796643 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796648 | orchestrator | ok: [testbed-node-1] 2025-06-02 
13:24:18.796652 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796657 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.796662 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.796666 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.796671 | orchestrator | 2025-06-02 13:24:18.796676 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 13:24:18.796683 | orchestrator | Monday 02 June 2025 13:21:56 +0000 (0:00:00.771) 0:08:08.774 *********** 2025-06-02 13:24:18.796688 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796692 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796697 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796702 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796707 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796711 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796716 | orchestrator | 2025-06-02 13:24:18.796721 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 13:24:18.796725 | orchestrator | Monday 02 June 2025 13:21:56 +0000 (0:00:00.642) 0:08:09.417 *********** 2025-06-02 13:24:18.796730 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796735 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:24:18.796740 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:24:18.796744 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.796749 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.796754 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.796758 | orchestrator | 2025-06-02 13:24:18.796763 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-02 13:24:18.796768 | orchestrator | Monday 02 June 2025 13:21:58 +0000 (0:00:01.291) 0:08:10.708 *********** 2025-06-02 13:24:18.796773 | orchestrator | changed: [testbed-node-0] 2025-06-02 
13:24:18.796777 | orchestrator | 2025-06-02 13:24:18.796782 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-02 13:24:18.796787 | orchestrator | Monday 02 June 2025 13:22:02 +0000 (0:00:04.332) 0:08:15.041 *********** 2025-06-02 13:24:18.796792 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796796 | orchestrator | 2025-06-02 13:24:18.796801 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-02 13:24:18.796806 | orchestrator | Monday 02 June 2025 13:22:04 +0000 (0:00:01.928) 0:08:16.969 *********** 2025-06-02 13:24:18.796821 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:24:18.796827 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.796831 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.796836 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.796841 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.796845 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.796850 | orchestrator | 2025-06-02 13:24:18.796855 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-02 13:24:18.796860 | orchestrator | Monday 02 June 2025 13:22:06 +0000 (0:00:01.837) 0:08:18.807 *********** 2025-06-02 13:24:18.796864 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.796869 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.796874 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.796878 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.796883 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.796888 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.796892 | orchestrator | 2025-06-02 13:24:18.796897 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-02 13:24:18.796902 | orchestrator | Monday 02 June 2025 13:22:07 +0000 
(0:00:00.983) 0:08:19.791 *********** 2025-06-02 13:24:18.796909 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.796914 | orchestrator | 2025-06-02 13:24:18.796919 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-02 13:24:18.796924 | orchestrator | Monday 02 June 2025 13:22:08 +0000 (0:00:01.323) 0:08:21.115 *********** 2025-06-02 13:24:18.796928 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.796933 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.796938 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.796942 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.796947 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.796952 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.796956 | orchestrator | 2025-06-02 13:24:18.796961 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-02 13:24:18.796966 | orchestrator | Monday 02 June 2025 13:22:10 +0000 (0:00:01.835) 0:08:22.950 *********** 2025-06-02 13:24:18.796971 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:24:18.796976 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:24:18.796980 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:24:18.796985 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.796990 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.796994 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.796999 | orchestrator | 2025-06-02 13:24:18.797004 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-02 13:24:18.797009 | orchestrator | Monday 02 June 2025 13:22:14 +0000 (0:00:04.351) 0:08:27.302 *********** 2025-06-02 13:24:18.797013 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.797018 | orchestrator |
2025-06-02 13:24:18.797023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-02 13:24:18.797028 | orchestrator | Monday 02 June 2025 13:22:16 +0000 (0:00:01.414) 0:08:28.717 ***********
2025-06-02 13:24:18.797033 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.797037 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.797042 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.797047 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797052 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797056 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797061 | orchestrator |
2025-06-02 13:24:18.797066 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-02 13:24:18.797092 | orchestrator | Monday 02 June 2025 13:22:16 +0000 (0:00:00.900) 0:08:29.618 ***********
2025-06-02 13:24:18.797101 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:24:18.797105 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:24:18.797110 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:24:18.797115 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.797120 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.797124 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.797129 | orchestrator |
2025-06-02 13:24:18.797134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-02 13:24:18.797139 | orchestrator | Monday 02 June 2025 13:22:19 +0000 (0:00:02.093) 0:08:31.711 ***********
2025-06-02 13:24:18.797146 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:24:18.797151 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:24:18.797156 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:24:18.797161 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797165 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797170 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797175 | orchestrator |
2025-06-02 13:24:18.797180 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-02 13:24:18.797184 | orchestrator |
2025-06-02 13:24:18.797189 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 13:24:18.797194 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:01.079) 0:08:32.791 ***********
2025-06-02 13:24:18.797199 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.797203 | orchestrator |
2025-06-02 13:24:18.797208 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 13:24:18.797212 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:00.551) 0:08:33.343 ***********
2025-06-02 13:24:18.797217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.797222 | orchestrator |
2025-06-02 13:24:18.797226 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 13:24:18.797231 | orchestrator | Monday 02 June 2025 13:22:21 +0000 (0:00:00.922) 0:08:34.265 ***********
2025-06-02 13:24:18.797235 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797239 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797244 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797248 | orchestrator |
2025-06-02 13:24:18.797253 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 13:24:18.797257 | orchestrator | Monday 02 June 2025 13:22:22 +0000 (0:00:00.449) 0:08:34.714 ***********
2025-06-02 13:24:18.797262 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797266 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797271 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797275 | orchestrator |
2025-06-02 13:24:18.797280 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 13:24:18.797284 | orchestrator | Monday 02 June 2025 13:22:22 +0000 (0:00:00.709) 0:08:35.424 ***********
2025-06-02 13:24:18.797289 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797293 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797298 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797302 | orchestrator |
2025-06-02 13:24:18.797307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 13:24:18.797311 | orchestrator | Monday 02 June 2025 13:22:23 +0000 (0:00:01.043) 0:08:36.467 ***********
2025-06-02 13:24:18.797316 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797320 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797325 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797329 | orchestrator |
2025-06-02 13:24:18.797334 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 13:24:18.797342 | orchestrator | Monday 02 June 2025 13:22:24 +0000 (0:00:00.758) 0:08:37.226 ***********
2025-06-02 13:24:18.797347 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797351 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797359 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797363 | orchestrator |
2025-06-02 13:24:18.797368 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 13:24:18.797372 | orchestrator | Monday 02 June 2025 13:22:24 +0000 (0:00:00.309) 0:08:37.536 ***********
2025-06-02 13:24:18.797377 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797381 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797386 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797390 | orchestrator |
2025-06-02 13:24:18.797395 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 13:24:18.797399 | orchestrator | Monday 02 June 2025 13:22:25 +0000 (0:00:00.299) 0:08:37.836 ***********
2025-06-02 13:24:18.797404 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797408 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797412 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797417 | orchestrator |
2025-06-02 13:24:18.797421 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 13:24:18.797426 | orchestrator | Monday 02 June 2025 13:22:25 +0000 (0:00:00.424) 0:08:38.261 ***********
2025-06-02 13:24:18.797430 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797435 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797439 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797444 | orchestrator |
2025-06-02 13:24:18.797449 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 13:24:18.797453 | orchestrator | Monday 02 June 2025 13:22:26 +0000 (0:00:00.666) 0:08:38.927 ***********
2025-06-02 13:24:18.797458 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797462 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797467 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797471 | orchestrator |
2025-06-02 13:24:18.797476 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 13:24:18.797480 | orchestrator | Monday 02 June 2025 13:22:26 +0000 (0:00:00.672) 0:08:39.600 ***********
2025-06-02 13:24:18.797485 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797489 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797493 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797498 | orchestrator |
2025-06-02 13:24:18.797502 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 13:24:18.797508 | orchestrator | Monday 02 June 2025 13:22:27 +0000 (0:00:00.269) 0:08:39.870 ***********
2025-06-02 13:24:18.797516 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797523 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797535 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797543 | orchestrator |
2025-06-02 13:24:18.797551 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 13:24:18.797558 | orchestrator | Monday 02 June 2025 13:22:27 +0000 (0:00:00.496) 0:08:40.367 ***********
2025-06-02 13:24:18.797565 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797572 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797578 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797584 | orchestrator |
2025-06-02 13:24:18.797594 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 13:24:18.797601 | orchestrator | Monday 02 June 2025 13:22:28 +0000 (0:00:00.357) 0:08:40.724 ***********
2025-06-02 13:24:18.797607 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797613 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797620 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797626 | orchestrator |
2025-06-02 13:24:18.797633 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 13:24:18.797640 | orchestrator | Monday 02 June 2025 13:22:28 +0000 (0:00:00.401) 0:08:41.126 ***********
2025-06-02 13:24:18.797647 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797653 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797660 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797667 | orchestrator |
2025-06-02 13:24:18.797679 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 13:24:18.797686 | orchestrator | Monday 02 June 2025 13:22:28 +0000 (0:00:00.317) 0:08:41.444 ***********
2025-06-02 13:24:18.797693 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797700 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797707 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797715 | orchestrator |
2025-06-02 13:24:18.797722 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 13:24:18.797729 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:00.416) 0:08:41.861 ***********
2025-06-02 13:24:18.797736 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797744 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797752 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797759 | orchestrator |
2025-06-02 13:24:18.797766 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 13:24:18.797774 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:00.273) 0:08:42.134 ***********
2025-06-02 13:24:18.797780 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797784 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797789 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797793 | orchestrator |
2025-06-02 13:24:18.797798 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 13:24:18.797802 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:00.250) 0:08:42.385 ***********
2025-06-02 13:24:18.797807 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797811 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797816 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797820 | orchestrator |
2025-06-02 13:24:18.797825 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 13:24:18.797829 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:00.282) 0:08:42.668 ***********
2025-06-02 13:24:18.797834 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.797838 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.797843 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.797847 | orchestrator |
2025-06-02 13:24:18.797851 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-02 13:24:18.797856 | orchestrator | Monday 02 June 2025 13:22:30 +0000 (0:00:00.736) 0:08:43.405 ***********
2025-06-02 13:24:18.797860 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.797868 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.797873 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-02 13:24:18.797877 | orchestrator |
2025-06-02 13:24:18.797881 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-02 13:24:18.797886 | orchestrator | Monday 02 June 2025 13:22:31 +0000 (0:00:00.402) 0:08:43.807 ***********
2025-06-02 13:24:18.797890 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 13:24:18.797895 | orchestrator |
2025-06-02 13:24:18.797899 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-02 13:24:18.797904 | orchestrator | Monday 02 June 2025 13:22:33 +0000 (0:00:02.042) 0:08:45.850 ***********
2025-06-02 13:24:18.797909 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-02 13:24:18.797914 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.797918 | orchestrator |
2025-06-02 13:24:18.797923 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-02 13:24:18.797927 | orchestrator | Monday 02 June 2025 13:22:33 +0000 (0:00:00.226) 0:08:46.076 ***********
2025-06-02 13:24:18.797932 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:24:18.797944 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:24:18.797948 | orchestrator |
2025-06-02 13:24:18.797953 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-02 13:24:18.797957 | orchestrator | Monday 02 June 2025 13:22:41 +0000 (0:00:08.255) 0:08:54.332 ***********
2025-06-02 13:24:18.797962 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 13:24:18.797966 | orchestrator |
2025-06-02 13:24:18.797971 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-02 13:24:18.797975 | orchestrator | Monday 02 June 2025 13:22:45 +0000 (0:00:03.439) 0:08:57.771 ***********
2025-06-02 13:24:18.797980 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.797984 | orchestrator |
2025-06-02 13:24:18.797992 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-02 13:24:18.797997 | orchestrator | Monday 02 June 2025 13:22:45 +0000 (0:00:00.556) 0:08:58.328 ***********
2025-06-02 13:24:18.798001 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 13:24:18.798006 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 13:24:18.798010 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-02 13:24:18.798038 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-02 13:24:18.798043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-02 13:24:18.798048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-02 13:24:18.798053 | orchestrator |
2025-06-02 13:24:18.798061 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-02 13:24:18.798079 | orchestrator | Monday 02 June 2025 13:22:46 +0000 (0:00:01.012) 0:08:59.341 ***********
2025-06-02 13:24:18.798087 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.798094 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.798101 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 13:24:18.798108 | orchestrator |
2025-06-02 13:24:18.798115 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-02 13:24:18.798122 | orchestrator | Monday 02 June 2025 13:22:48 +0000 (0:00:02.261) 0:09:01.603 ***********
2025-06-02 13:24:18.798130 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.798137 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.798143 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798150 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.798157 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.798164 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798171 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.798178 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.798184 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798190 | orchestrator |
2025-06-02 13:24:18.798197 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-02 13:24:18.798204 | orchestrator | Monday 02 June 2025 13:22:50 +0000 (0:00:01.526) 0:09:03.130 ***********
2025-06-02 13:24:18.798211 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798219 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798227 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798234 | orchestrator |
2025-06-02 13:24:18.798242 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-02 13:24:18.798265 | orchestrator | Monday 02 June 2025 13:22:52 +0000 (0:00:02.570) 0:09:05.700 ***********
2025-06-02 13:24:18.798271 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798278 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798283 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798288 | orchestrator |
2025-06-02 13:24:18.798292 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-02 13:24:18.798297 | orchestrator | Monday 02 June 2025 13:22:53 +0000 (0:00:00.295) 0:09:05.995 ***********
2025-06-02 13:24:18.798301 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.798306 | orchestrator |
2025-06-02 13:24:18.798310 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-02 13:24:18.798315 | orchestrator | Monday 02 June 2025 13:22:54 +0000 (0:00:00.974) 0:09:06.970 ***********
2025-06-02 13:24:18.798319 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.798324 | orchestrator |
2025-06-02 13:24:18.798328 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-02 13:24:18.798333 | orchestrator | Monday 02 June 2025 13:22:54 +0000 (0:00:00.554) 0:09:07.525 ***********
2025-06-02 13:24:18.798337 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798342 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798346 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798350 | orchestrator |
2025-06-02 13:24:18.798355 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-02 13:24:18.798359 | orchestrator | Monday 02 June 2025 13:22:56 +0000 (0:00:01.239) 0:09:08.765 ***********
2025-06-02 13:24:18.798364 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798368 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798373 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798377 | orchestrator |
2025-06-02 13:24:18.798382 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-02 13:24:18.798386 | orchestrator | Monday 02 June 2025 13:22:57 +0000 (0:00:01.400) 0:09:10.165 ***********
2025-06-02 13:24:18.798390 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798395 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798399 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798404 | orchestrator |
2025-06-02 13:24:18.798408 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-02 13:24:18.798413 | orchestrator | Monday 02 June 2025 13:22:59 +0000 (0:00:01.751) 0:09:11.917 ***********
2025-06-02 13:24:18.798417 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798422 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798426 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798430 | orchestrator |
2025-06-02 13:24:18.798435 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-02 13:24:18.798439 | orchestrator | Monday 02 June 2025 13:23:01 +0000 (0:00:01.936) 0:09:13.853 ***********
2025-06-02 13:24:18.798444 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798449 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798453 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798458 | orchestrator |
2025-06-02 13:24:18.798467 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-02 13:24:18.798472 | orchestrator | Monday 02 June 2025 13:23:02 +0000 (0:00:01.357) 0:09:15.211 ***********
2025-06-02 13:24:18.798476 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798481 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798485 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798489 | orchestrator |
2025-06-02 13:24:18.798494 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-02 13:24:18.798498 | orchestrator | Monday 02 June 2025 13:23:03 +0000 (0:00:00.652) 0:09:15.864 ***********
2025-06-02 13:24:18.798503 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.798510 | orchestrator |
2025-06-02 13:24:18.798515 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-02 13:24:18.798519 | orchestrator | Monday 02 June 2025 13:23:03 +0000 (0:00:00.731) 0:09:16.595 ***********
2025-06-02 13:24:18.798524 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798528 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798533 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798537 | orchestrator |
2025-06-02 13:24:18.798542 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-02 13:24:18.798546 | orchestrator | Monday 02 June 2025 13:23:04 +0000 (0:00:00.309) 0:09:16.905 ***********
2025-06-02 13:24:18.798551 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.798555 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.798560 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.798564 | orchestrator |
2025-06-02 13:24:18.798569 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-02 13:24:18.798573 | orchestrator | Monday 02 June 2025 13:23:05 +0000 (0:00:01.171) 0:09:18.077 ***********
2025-06-02 13:24:18.798578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:24:18.798582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:24:18.798587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:24:18.798591 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798595 | orchestrator |
2025-06-02 13:24:18.798600 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-02 13:24:18.798604 | orchestrator | Monday 02 June 2025 13:23:06 +0000 (0:00:00.869) 0:09:18.946 ***********
2025-06-02 13:24:18.798609 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798613 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798618 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798622 | orchestrator |
2025-06-02 13:24:18.798627 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-02 13:24:18.798631 | orchestrator |
2025-06-02 13:24:18.798636 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-02 13:24:18.798640 | orchestrator | Monday 02 June 2025 13:23:06 +0000 (0:00:00.749) 0:09:19.695 ***********
2025-06-02 13:24:18.798645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.798650 | orchestrator |
2025-06-02 13:24:18.798657 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-02 13:24:18.798661 | orchestrator | Monday 02 June 2025 13:23:07 +0000 (0:00:00.515) 0:09:20.211 ***********
2025-06-02 13:24:18.798666 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.798670 | orchestrator |
2025-06-02 13:24:18.798675 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-02 13:24:18.798679 | orchestrator | Monday 02 June 2025 13:23:08 +0000 (0:00:00.800) 0:09:21.011 ***********
2025-06-02 13:24:18.798684 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798688 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798693 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798697 | orchestrator |
2025-06-02 13:24:18.798702 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-02 13:24:18.798706 | orchestrator | Monday 02 June 2025 13:23:08 +0000 (0:00:00.315) 0:09:21.327 ***********
2025-06-02 13:24:18.798711 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798715 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798720 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798724 | orchestrator |
2025-06-02 13:24:18.798728 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-02 13:24:18.798733 | orchestrator | Monday 02 June 2025 13:23:09 +0000 (0:00:00.699) 0:09:22.027 ***********
2025-06-02 13:24:18.798740 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798744 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798749 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798753 | orchestrator |
2025-06-02 13:24:18.798758 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-02 13:24:18.798762 | orchestrator | Monday 02 June 2025 13:23:10 +0000 (0:00:00.698) 0:09:22.725 ***********
2025-06-02 13:24:18.798767 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798771 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798776 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798780 | orchestrator |
2025-06-02 13:24:18.798785 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-02 13:24:18.798789 | orchestrator | Monday 02 June 2025 13:23:10 +0000 (0:00:00.958) 0:09:23.684 ***********
2025-06-02 13:24:18.798793 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798798 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798802 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798807 | orchestrator |
2025-06-02 13:24:18.798811 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-02 13:24:18.798816 | orchestrator | Monday 02 June 2025 13:23:11 +0000 (0:00:00.295) 0:09:23.979 ***********
2025-06-02 13:24:18.798820 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798825 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798829 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798834 | orchestrator |
2025-06-02 13:24:18.798838 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-02 13:24:18.798845 | orchestrator | Monday 02 June 2025 13:23:11 +0000 (0:00:00.298) 0:09:24.278 ***********
2025-06-02 13:24:18.798850 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798854 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798859 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798863 | orchestrator |
2025-06-02 13:24:18.798868 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-02 13:24:18.798872 | orchestrator | Monday 02 June 2025 13:23:11 +0000 (0:00:00.294) 0:09:24.572 ***********
2025-06-02 13:24:18.798877 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798881 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798885 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798890 | orchestrator |
2025-06-02 13:24:18.798894 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-02 13:24:18.798900 | orchestrator | Monday 02 June 2025 13:23:12 +0000 (0:00:00.988) 0:09:25.561 ***********
2025-06-02 13:24:18.798907 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.798915 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.798921 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.798929 | orchestrator |
2025-06-02 13:24:18.798937 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-02 13:24:18.798944 | orchestrator | Monday 02 June 2025 13:23:13 +0000 (0:00:00.698) 0:09:26.260 ***********
2025-06-02 13:24:18.798952 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798958 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798962 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798967 | orchestrator |
2025-06-02 13:24:18.798971 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-02 13:24:18.798976 | orchestrator | Monday 02 June 2025 13:23:13 +0000 (0:00:00.325) 0:09:26.585 ***********
2025-06-02 13:24:18.798980 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.798984 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.798989 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.798993 | orchestrator |
2025-06-02 13:24:18.798998 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-02 13:24:18.799002 | orchestrator | Monday 02 June 2025 13:23:14 +0000 (0:00:00.316) 0:09:26.901 ***********
2025-06-02 13:24:18.799007 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.799011 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.799027 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.799032 | orchestrator |
2025-06-02 13:24:18.799037 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 13:24:18.799041 | orchestrator | Monday 02 June 2025 13:23:14 +0000 (0:00:00.707) 0:09:27.608 ***********
2025-06-02 13:24:18.799045 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.799050 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.799054 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.799059 | orchestrator |
2025-06-02 13:24:18.799063 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 13:24:18.799093 | orchestrator | Monday 02 June 2025 13:23:15 +0000 (0:00:00.356) 0:09:27.965 ***********
2025-06-02 13:24:18.799099 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.799103 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.799108 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.799112 | orchestrator |
2025-06-02 13:24:18.799119 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 13:24:18.799124 | orchestrator | Monday 02 June 2025 13:23:15 +0000 (0:00:00.339) 0:09:28.304 ***********
2025-06-02 13:24:18.799129 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.799133 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.799138 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.799142 | orchestrator |
2025-06-02 13:24:18.799147 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 13:24:18.799151 | orchestrator | Monday 02 June 2025 13:23:15 +0000 (0:00:00.294) 0:09:28.599 ***********
2025-06-02 13:24:18.799156 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.799160 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.799165 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.799169 | orchestrator |
2025-06-02 13:24:18.799174 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 13:24:18.799178 | orchestrator | Monday 02 June 2025 13:23:16 +0000 (0:00:00.530) 0:09:29.129 ***********
2025-06-02 13:24:18.799183 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.799187 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.799191 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.799196 | orchestrator |
2025-06-02 13:24:18.799200 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 13:24:18.799205 | orchestrator | Monday 02 June 2025 13:23:16 +0000 (0:00:00.329) 0:09:29.459 ***********
2025-06-02 13:24:18.799209 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.799214 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.799218 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.799223 | orchestrator |
2025-06-02 13:24:18.799227 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 13:24:18.799231 | orchestrator | Monday 02 June 2025 13:23:17 +0000 (0:00:00.402) 0:09:29.862 ***********
2025-06-02 13:24:18.799236 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:24:18.799240 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:24:18.799245 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:24:18.799249 | orchestrator |
2025-06-02 13:24:18.799254 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-02 13:24:18.799258 | orchestrator | Monday 02 June 2025 13:23:17 +0000 (0:00:00.771) 0:09:30.633 ***********
2025-06-02 13:24:18.799263 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.799267 | orchestrator |
2025-06-02 13:24:18.799272 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-02 13:24:18.799276 | orchestrator | Monday 02 June 2025 13:23:18 +0000 (0:00:00.502) 0:09:31.136 ***********
2025-06-02 13:24:18.799281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.799285 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.799290 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 13:24:18.799298 | orchestrator |
2025-06-02 13:24:18.799302 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-02 13:24:18.799310 | orchestrator | Monday 02 June 2025 13:23:20 +0000 (0:00:02.087) 0:09:33.224 ***********
2025-06-02 13:24:18.799315 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.799319 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-02 13:24:18.799324 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:24:18.799328 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.799333 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-02 13:24:18.799337 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:24:18.799342 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.799346 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-02 13:24:18.799351 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:24:18.799355 | orchestrator |
2025-06-02 13:24:18.799360 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-02 13:24:18.799364 | orchestrator | Monday 02 June 2025 13:23:21 +0000 (0:00:01.388) 0:09:34.612 ***********
2025-06-02 13:24:18.799369 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:24:18.799373 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:24:18.799377 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:24:18.799382 | orchestrator |
2025-06-02 13:24:18.799386 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-02 13:24:18.799391 | orchestrator | Monday 02 June 2025 13:23:22 +0000 (0:00:00.297) 0:09:34.909 ***********
2025-06-02 13:24:18.799395 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:24:18.799400 | orchestrator |
2025-06-02 13:24:18.799404 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-02 13:24:18.799409 | orchestrator | Monday 02 June 2025 13:23:22 +0000 (0:00:00.525) 0:09:35.435 ***********
2025-06-02 13:24:18.799413 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.799418 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.799423 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 13:24:18.799427 | orchestrator |
2025-06-02 13:24:18.799432 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-02 13:24:18.799436 | orchestrator | Monday 02 June 2025 13:23:23 +0000 (0:00:01.107) 0:09:36.542 ***********
2025-06-02 13:24:18.799441 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.799448 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 13:24:18.799452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.799457 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 13:24:18.799461 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.799466 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-02 13:24:18.799470 | orchestrator |
2025-06-02 13:24:18.799475 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-02 13:24:18.799479 | orchestrator | Monday 02 June 2025 13:23:28 +0000 (0:00:04.465) 0:09:41.008 ***********
2025-06-02 13:24:18.799483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:24:18.799491 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 13:24:18.799495 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] =>
(item=None) 2025-06-02 13:24:18.799500 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 13:24:18.799504 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 13:24:18.799509 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 13:24:18.799513 | orchestrator | 2025-06-02 13:24:18.799518 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 13:24:18.799522 | orchestrator | Monday 02 June 2025 13:23:30 +0000 (0:00:02.262) 0:09:43.270 *********** 2025-06-02 13:24:18.799526 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 13:24:18.799531 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.799535 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 13:24:18.799540 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.799544 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 13:24:18.799549 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.799553 | orchestrator | 2025-06-02 13:24:18.799558 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-02 13:24:18.799562 | orchestrator | Monday 02 June 2025 13:23:31 +0000 (0:00:01.133) 0:09:44.404 *********** 2025-06-02 13:24:18.799566 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-02 13:24:18.799570 | orchestrator | 2025-06-02 13:24:18.799574 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-02 13:24:18.799578 | orchestrator | Monday 02 June 2025 13:23:31 +0000 (0:00:00.224) 0:09:44.628 *********** 2025-06-02 13:24:18.799585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799589 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799605 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799609 | orchestrator | 2025-06-02 13:24:18.799614 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-02 13:24:18.799618 | orchestrator | Monday 02 June 2025 13:23:32 +0000 (0:00:00.867) 0:09:45.496 *********** 2025-06-02 13:24:18.799622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 13:24:18.799642 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799646 | orchestrator | 2025-06-02 13:24:18.799650 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-02 13:24:18.799655 | orchestrator | Monday 02 June 2025 13:23:33 +0000 (0:00:01.087) 0:09:46.583 *********** 2025-06-02 13:24:18.799664 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 13:24:18.799668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 13:24:18.799674 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 13:24:18.799678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 13:24:18.799683 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 13:24:18.799687 | orchestrator | 2025-06-02 13:24:18.799691 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-02 13:24:18.799695 | orchestrator | Monday 02 June 2025 13:24:05 +0000 (0:00:31.242) 0:10:17.826 *********** 2025-06-02 13:24:18.799699 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799703 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.799707 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.799711 | orchestrator | 2025-06-02 13:24:18.799715 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-02 13:24:18.799719 | orchestrator | Monday 02 June 2025 13:24:05 +0000 (0:00:00.309) 0:10:18.135 
*********** 2025-06-02 13:24:18.799723 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799727 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.799731 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.799735 | orchestrator | 2025-06-02 13:24:18.799739 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-02 13:24:18.799743 | orchestrator | Monday 02 June 2025 13:24:05 +0000 (0:00:00.299) 0:10:18.434 *********** 2025-06-02 13:24:18.799747 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.799752 | orchestrator | 2025-06-02 13:24:18.799756 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-02 13:24:18.799760 | orchestrator | Monday 02 June 2025 13:24:06 +0000 (0:00:00.711) 0:10:19.146 *********** 2025-06-02 13:24:18.799764 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.799768 | orchestrator | 2025-06-02 13:24:18.799772 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-02 13:24:18.799776 | orchestrator | Monday 02 June 2025 13:24:06 +0000 (0:00:00.504) 0:10:19.650 *********** 2025-06-02 13:24:18.799780 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.799784 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.799788 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.799792 | orchestrator | 2025-06-02 13:24:18.799796 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-02 13:24:18.799800 | orchestrator | Monday 02 June 2025 13:24:08 +0000 (0:00:01.253) 0:10:20.904 *********** 2025-06-02 13:24:18.799806 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.799810 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 13:24:18.799814 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.799818 | orchestrator | 2025-06-02 13:24:18.799823 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-02 13:24:18.799827 | orchestrator | Monday 02 June 2025 13:24:09 +0000 (0:00:01.388) 0:10:22.292 *********** 2025-06-02 13:24:18.799831 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:24:18.799835 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:24:18.799839 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:24:18.799843 | orchestrator | 2025-06-02 13:24:18.799850 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-02 13:24:18.799854 | orchestrator | Monday 02 June 2025 13:24:11 +0000 (0:00:01.777) 0:10:24.070 *********** 2025-06-02 13:24:18.799858 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 13:24:18.799862 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 13:24:18.799866 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 13:24:18.799870 | orchestrator | 2025-06-02 13:24:18.799874 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 13:24:18.799878 | orchestrator | Monday 02 June 2025 13:24:13 +0000 (0:00:02.585) 0:10:26.656 *********** 2025-06-02 13:24:18.799882 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799886 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.799891 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.799895 | orchestrator | 2025-06-02 13:24:18.799899 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-02 13:24:18.799903 | orchestrator | Monday 02 June 2025 13:24:14 +0000 (0:00:00.339) 0:10:26.995 *********** 2025-06-02 13:24:18.799907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:24:18.799911 | orchestrator | 2025-06-02 13:24:18.799915 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 13:24:18.799919 | orchestrator | Monday 02 June 2025 13:24:14 +0000 (0:00:00.493) 0:10:27.489 *********** 2025-06-02 13:24:18.799923 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.799927 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.799931 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.799935 | orchestrator | 2025-06-02 13:24:18.799939 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 13:24:18.799943 | orchestrator | Monday 02 June 2025 13:24:15 +0000 (0:00:00.550) 0:10:28.039 *********** 2025-06-02 13:24:18.799947 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799951 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:24:18.799958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:24:18.799962 | orchestrator | 2025-06-02 13:24:18.799966 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 13:24:18.799970 | orchestrator | Monday 02 June 2025 13:24:15 +0000 (0:00:00.324) 0:10:28.364 *********** 2025-06-02 13:24:18.799974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 13:24:18.799978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 13:24:18.799982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 13:24:18.799986 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:24:18.799990 | 
orchestrator | 2025-06-02 13:24:18.799994 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 13:24:18.799998 | orchestrator | Monday 02 June 2025 13:24:16 +0000 (0:00:00.608) 0:10:28.972 *********** 2025-06-02 13:24:18.800002 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:24:18.800006 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:24:18.800010 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:24:18.800014 | orchestrator | 2025-06-02 13:24:18.800018 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:24:18.800023 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-02 13:24:18.800027 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-02 13:24:18.800031 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-02 13:24:18.800038 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-02 13:24:18.800042 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-02 13:24:18.800046 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-02 13:24:18.800050 | orchestrator | 2025-06-02 13:24:18.800054 | orchestrator | 2025-06-02 13:24:18.800058 | orchestrator | 2025-06-02 13:24:18.800062 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:24:18.800066 | orchestrator | Monday 02 June 2025 13:24:16 +0000 (0:00:00.234) 0:10:29.206 *********** 2025-06-02 13:24:18.800081 | orchestrator | =============================================================================== 2025-06-02 13:24:18.800085 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 52.25s 2025-06-02 13:24:18.800091 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.98s 2025-06-02 13:24:18.800095 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.24s 2025-06-02 13:24:18.800099 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.04s 2025-06-02 13:24:18.800103 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.82s 2025-06-02 13:24:18.800107 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.15s 2025-06-02 13:24:18.800112 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.03s 2025-06-02 13:24:18.800116 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 12.47s 2025-06-02 13:24:18.800120 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.14s 2025-06-02 13:24:18.800124 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.26s 2025-06-02 13:24:18.800128 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s 2025-06-02 13:24:18.800132 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.19s 2025-06-02 13:24:18.800136 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.54s 2025-06-02 13:24:18.800140 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.47s 2025-06-02 13:24:18.800144 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.35s 2025-06-02 13:24:18.800148 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.33s 2025-06-02 13:24:18.800152 | orchestrator | ceph-osd : 
Systemd start osd -------------------------------------------- 3.67s 2025-06-02 13:24:18.800156 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.44s 2025-06-02 13:24:18.800160 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.32s 2025-06-02 13:24:18.800164 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.21s 2025-06-02 13:24:18.800168 | orchestrator | 2025-06-02 13:24:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:21.831820 | orchestrator | 2025-06-02 13:24:21 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:21.836116 | orchestrator | 2025-06-02 13:24:21 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:21.836175 | orchestrator | 2025-06-02 13:24:21 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:24:21.836199 | orchestrator | 2025-06-02 13:24:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:24.877001 | orchestrator | 2025-06-02 13:24:24 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:24.878358 | orchestrator | 2025-06-02 13:24:24 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:24.880548 | orchestrator | 2025-06-02 13:24:24 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:24:24.880995 | orchestrator | 2025-06-02 13:24:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:24:27.932833 | orchestrator | 2025-06-02 13:24:27 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:24:27.933297 | orchestrator | 2025-06-02 13:24:27 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:24:27.935755 | orchestrator | 2025-06-02 13:24:27 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state 
STARTED 2025-06-02 13:24:27.935785 | orchestrator | 2025-06-02 13:24:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:04.621951 | orchestrator | 2025-06-02 13:25:04 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state
STARTED 2025-06-02 13:25:04.623219 | orchestrator | 2025-06-02 13:25:04 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:04.624548 | orchestrator | 2025-06-02 13:25:04 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:04.624867 | orchestrator | 2025-06-02 13:25:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:07.670630 | orchestrator | 2025-06-02 13:25:07 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state STARTED 2025-06-02 13:25:07.673550 | orchestrator | 2025-06-02 13:25:07 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:07.675732 | orchestrator | 2025-06-02 13:25:07 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:07.675935 | orchestrator | 2025-06-02 13:25:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:10.727445 | orchestrator | 2025-06-02 13:25:10 | INFO  | Task c3626778-cdc3-4ed9-85cf-e1ae5389bfa6 is in state SUCCESS 2025-06-02 13:25:10.728575 | orchestrator | 2025-06-02 13:25:10.728601 | orchestrator | 2025-06-02 13:25:10.728608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:25:10.728615 | orchestrator | 2025-06-02 13:25:10.728620 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:25:10.728626 | orchestrator | Monday 02 June 2025 13:22:17 +0000 (0:00:00.342) 0:00:00.343 *********** 2025-06-02 13:25:10.728632 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:10.728638 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:25:10.728644 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:25:10.728649 | orchestrator | 2025-06-02 13:25:10.728654 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:25:10.728660 | orchestrator | Monday 02 June 2025 13:22:18 +0000 
(0:00:00.311) 0:00:00.654 *********** 2025-06-02 13:25:10.728665 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-02 13:25:10.728671 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-02 13:25:10.728675 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-02 13:25:10.728681 | orchestrator | 2025-06-02 13:25:10.728686 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-02 13:25:10.728691 | orchestrator | 2025-06-02 13:25:10.728696 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 13:25:10.728701 | orchestrator | Monday 02 June 2025 13:22:18 +0000 (0:00:00.394) 0:00:01.049 *********** 2025-06-02 13:25:10.728706 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:10.728711 | orchestrator | 2025-06-02 13:25:10.728716 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-02 13:25:10.728733 | orchestrator | Monday 02 June 2025 13:22:18 +0000 (0:00:00.481) 0:00:01.530 *********** 2025-06-02 13:25:10.728739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:25:10.728744 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:25:10.728749 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 13:25:10.728754 | orchestrator | 2025-06-02 13:25:10.728759 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-02 13:25:10.728764 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:01.834) 0:00:03.364 *********** 2025-06-02 13:25:10.728773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.728831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.728838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.728848 | orchestrator | 2025-06-02 13:25:10.728854 | 
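For readability: the per-item dictionaries dumped above all follow one kolla-ansible-style service definition per container. A minimal YAML sketch of the `opensearch` entry, reconstructed from the logged values for testbed-node-0 (layout and key names are taken from the log items, not from the actual role source; the IP shown is the node's internal address):

```yaml
# Sketch reconstructed from the logged item dicts above; not the
# actual kolla-ansible role source. Values are the testbed-node-0 ones.
opensearch:
  container_name: opensearch
  group: opensearch
  enabled: true
  image: registry.osism.tech/kolla/release/opensearch:2.19.2.20250530
  environment:
    OPENSEARCH_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
  volumes:
    - "/etc/kolla/opensearch/:/var/lib/kolla/config_files/"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "opensearch:/var/lib/opensearch/data"
    - "kolla_logs:/var/log/kolla/"
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]
    timeout: "30"
  haproxy:
    opensearch:
      enabled: true
      mode: http
      external: false
      port: "9200"
      frontend_http_extra: ["option dontlog-normal"]
```

The `opensearch-dashboards` items differ only in image, port 5601, the `OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN` environment flag, and the extra external haproxy frontend on `api.testbed.osism.xyz`; the earlier "Setting sysctl values" task applies `vm.max_map_count = 262144`, the kernel setting OpenSearch requires for its mmapped indices.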
orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 13:25:10.728859 | orchestrator | Monday 02 June 2025 13:22:23 +0000 (0:00:02.509) 0:00:05.874 *********** 2025-06-02 13:25:10.728864 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:10.728869 | orchestrator | 2025-06-02 13:25:10.728874 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 13:25:10.728879 | orchestrator | Monday 02 June 2025 13:22:24 +0000 (0:00:00.867) 0:00:06.742 *********** 2025-06-02 13:25:10.728889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.728910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729031 | orchestrator | 2025-06-02 13:25:10.729037 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 13:25:10.729042 | orchestrator | Monday 02 June 2025 13:22:27 +0000 (0:00:03.017) 0:00:09.759 *********** 2025-06-02 13:25:10.729048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:25:10.729059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729065 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:10.729074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:25:10.729083 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729089 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:10.729095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 
13:25:10.729144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729151 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:10.729156 | orchestrator | 2025-06-02 13:25:10.729161 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 13:25:10.729166 | orchestrator | Monday 02 June 2025 13:22:28 +0000 (0:00:01.668) 0:00:11.427 *********** 2025-06-02 13:25:10.729175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:25:10.729225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729232 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:10.729238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:25:10.729248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729254 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:10.729263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 13:25:10.729269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 13:25:10.729274 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:10.729279 | orchestrator | 2025-06-02 13:25:10.729287 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-02 13:25:10.729296 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:01.019) 0:00:12.447 *********** 2025-06-02 
13:25:10.729301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-06-02 13:25:10.729356 | orchestrator | 2025-06-02 13:25:10.729361 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-02 13:25:10.729367 | orchestrator | Monday 02 June 2025 13:22:32 +0000 (0:00:03.086) 0:00:15.533 *********** 2025-06-02 13:25:10.729372 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:25:10.729377 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729382 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:25:10.729387 | orchestrator | 2025-06-02 13:25:10.729392 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-02 13:25:10.729397 | orchestrator | Monday 02 June 2025 13:22:35 +0000 (0:00:02.647) 0:00:18.181 *********** 2025-06-02 13:25:10.729402 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729407 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:25:10.729412 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:25:10.729417 | orchestrator | 2025-06-02 13:25:10.729422 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-02 13:25:10.729427 | orchestrator | Monday 02 June 2025 13:22:37 +0000 (0:00:01.788) 0:00:19.969 *********** 2025-06-02 13:25:10.729437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 13:25:10.729460 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 13:25:10.729505 | orchestrator | 2025-06-02 13:25:10.729515 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 13:25:10.729521 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:01.863) 0:00:21.833 *********** 2025-06-02 13:25:10.729526 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:10.729531 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:10.729536 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:10.729541 | orchestrator | 2025-06-02 13:25:10.729546 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 13:25:10.729551 | orchestrator | Monday 02 
June 2025 13:22:39 +0000 (0:00:00.295) 0:00:22.128 *********** 2025-06-02 13:25:10.729556 | orchestrator | 2025-06-02 13:25:10.729561 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 13:25:10.729566 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:00.063) 0:00:22.191 *********** 2025-06-02 13:25:10.729571 | orchestrator | 2025-06-02 13:25:10.729576 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 13:25:10.729581 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:00.062) 0:00:22.254 *********** 2025-06-02 13:25:10.729586 | orchestrator | 2025-06-02 13:25:10.729591 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-02 13:25:10.729596 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:00.304) 0:00:22.559 *********** 2025-06-02 13:25:10.729601 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:10.729606 | orchestrator | 2025-06-02 13:25:10.729611 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-02 13:25:10.729616 | orchestrator | Monday 02 June 2025 13:22:40 +0000 (0:00:00.217) 0:00:22.776 *********** 2025-06-02 13:25:10.729621 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:10.729626 | orchestrator | 2025-06-02 13:25:10.729631 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-02 13:25:10.729637 | orchestrator | Monday 02 June 2025 13:22:40 +0000 (0:00:00.217) 0:00:22.993 *********** 2025-06-02 13:25:10.729642 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729647 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:25:10.729652 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:25:10.729657 | orchestrator | 2025-06-02 13:25:10.729662 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards 
container] ********* 2025-06-02 13:25:10.729667 | orchestrator | Monday 02 June 2025 13:23:43 +0000 (0:01:03.268) 0:01:26.262 *********** 2025-06-02 13:25:10.729672 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729677 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:25:10.729682 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:25:10.729687 | orchestrator | 2025-06-02 13:25:10.729692 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 13:25:10.729697 | orchestrator | Monday 02 June 2025 13:24:59 +0000 (0:01:15.487) 0:02:41.750 *********** 2025-06-02 13:25:10.729702 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:10.729709 | orchestrator | 2025-06-02 13:25:10.729716 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-02 13:25:10.729721 | orchestrator | Monday 02 June 2025 13:24:59 +0000 (0:00:00.799) 0:02:42.549 *********** 2025-06-02 13:25:10.729726 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:10.729732 | orchestrator | 2025-06-02 13:25:10.729740 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-02 13:25:10.729754 | orchestrator | Monday 02 June 2025 13:25:02 +0000 (0:00:02.418) 0:02:44.968 *********** 2025-06-02 13:25:10.729762 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:10.729771 | orchestrator | 2025-06-02 13:25:10.729780 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-02 13:25:10.729788 | orchestrator | Monday 02 June 2025 13:25:04 +0000 (0:00:02.213) 0:02:47.182 *********** 2025-06-02 13:25:10.729796 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729805 | orchestrator | 2025-06-02 13:25:10.729812 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] 
***************** 2025-06-02 13:25:10.729817 | orchestrator | Monday 02 June 2025 13:25:07 +0000 (0:00:02.695) 0:02:49.877 *********** 2025-06-02 13:25:10.729822 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:10.729827 | orchestrator | 2025-06-02 13:25:10.729835 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:25:10.729841 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:25:10.729850 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 13:25:10.729859 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 13:25:10.729868 | orchestrator | 2025-06-02 13:25:10.729877 | orchestrator | 2025-06-02 13:25:10.729883 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:25:10.729889 | orchestrator | Monday 02 June 2025 13:25:09 +0000 (0:00:02.502) 0:02:52.379 *********** 2025-06-02 13:25:10.729895 | orchestrator | =============================================================================== 2025-06-02 13:25:10.729901 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.49s 2025-06-02 13:25:10.729907 | orchestrator | opensearch : Restart opensearch container ------------------------------ 63.27s 2025-06-02 13:25:10.729913 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.09s 2025-06-02 13:25:10.729919 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.02s 2025-06-02 13:25:10.729925 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s 2025-06-02 13:25:10.729934 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.65s 2025-06-02 13:25:10.729940 
| orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.51s 2025-06-02 13:25:10.729945 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.50s 2025-06-02 13:25:10.729951 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.42s 2025-06-02 13:25:10.729957 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2025-06-02 13:25:10.729963 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.86s 2025-06-02 13:25:10.729969 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.83s 2025-06-02 13:25:10.729975 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.79s 2025-06-02 13:25:10.729980 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.67s 2025-06-02 13:25:10.729986 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.02s 2025-06-02 13:25:10.729992 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.87s 2025-06-02 13:25:10.729998 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.80s 2025-06-02 13:25:10.730004 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-06-02 13:25:10.730010 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.43s 2025-06-02 13:25:10.730073 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2025-06-02 13:25:10.730085 | orchestrator | 2025-06-02 13:25:10 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:10.731631 | orchestrator | 2025-06-02 13:25:10 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 
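The orchestrator lines here show osism's task watcher polling two task IDs once per second until each leaves the STARTED state. A minimal sketch of that wait loop, assuming a hypothetical `get_task_state()` lookup (the real client belongs to the osism tooling and is not shown in the log):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1, timeout=3600):
    """Poll task states until every task leaves STARTED, mimicking the
    'Task ... is in state STARTED / Wait 1 second(s)' loop in the log."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            # Terminal states drop the task from the pending set.
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
    return True
```

In the log both tasks run concurrently, which is why the watcher reports each ID on every polling round rather than blocking on one task at a time.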
2025-06-02 13:25:10.731720 | orchestrator | 2025-06-02 13:25:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:13.783050 | orchestrator | 2025-06-02 13:25:13 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:13.784006 | orchestrator | 2025-06-02 13:25:13 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:13.784036 | orchestrator | 2025-06-02 13:25:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:16.839791 | orchestrator | 2025-06-02 13:25:16 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:16.841623 | orchestrator | 2025-06-02 13:25:16 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:16.841802 | orchestrator | 2025-06-02 13:25:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:19.884957 | orchestrator | 2025-06-02 13:25:19 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:19.886399 | orchestrator | 2025-06-02 13:25:19 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:19.886524 | orchestrator | 2025-06-02 13:25:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:22.954981 | orchestrator | 2025-06-02 13:25:22 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state STARTED 2025-06-02 13:25:22.957049 | orchestrator | 2025-06-02 13:25:22 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED 2025-06-02 13:25:22.957086 | orchestrator | 2025-06-02 13:25:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:25:26.020739 | orchestrator | 2025-06-02 13:25:26 | INFO  | Task 9e1cff6d-674f-4469-8fdf-2d6b1a16e6fa is in state SUCCESS 2025-06-02 13:25:26.021896 | orchestrator | 2025-06-02 13:25:26.021934 | orchestrator | 2025-06-02 13:25:26.021947 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 
2025-06-02 13:25:26.021959 | orchestrator | 2025-06-02 13:25:26.021970 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 13:25:26.021981 | orchestrator | Monday 02 June 2025 13:22:17 +0000 (0:00:00.112) 0:00:00.112 *********** 2025-06-02 13:25:26.021992 | orchestrator | ok: [localhost] => { 2025-06-02 13:25:26.022005 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 13:25:26.022164 | orchestrator | } 2025-06-02 13:25:26.022178 | orchestrator | 2025-06-02 13:25:26.022190 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 13:25:26.022203 | orchestrator | Monday 02 June 2025 13:22:17 +0000 (0:00:00.046) 0:00:00.158 *********** 2025-06-02 13:25:26.022214 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 13:25:26.022228 | orchestrator | ...ignoring 2025-06-02 13:25:26.022239 | orchestrator | 2025-06-02 13:25:26.022251 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 13:25:26.022263 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:02.864) 0:00:03.023 *********** 2025-06-02 13:25:26.022275 | orchestrator | skipping: [localhost] 2025-06-02 13:25:26.022286 | orchestrator | 2025-06-02 13:25:26.022298 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 13:25:26.022310 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:00.065) 0:00:03.088 *********** 2025-06-02 13:25:26.022337 | orchestrator | ok: [localhost] 2025-06-02 13:25:26.022371 | orchestrator | 2025-06-02 13:25:26.022383 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:25:26.022393 | orchestrator | 
2025-06-02 13:25:26.022404 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:25:26.022415 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:00.178) 0:00:03.266 *********** 2025-06-02 13:25:26.022426 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:26.022436 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:25:26.022447 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:25:26.022458 | orchestrator | 2025-06-02 13:25:26.022469 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:25:26.022480 | orchestrator | Monday 02 June 2025 13:22:20 +0000 (0:00:00.405) 0:00:03.672 *********** 2025-06-02 13:25:26.022493 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 13:25:26.022506 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 13:25:26.022518 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 13:25:26.022530 | orchestrator | 2025-06-02 13:25:26.022543 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 13:25:26.022555 | orchestrator | 2025-06-02 13:25:26.022568 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 13:25:26.022581 | orchestrator | Monday 02 June 2025 13:22:22 +0000 (0:00:01.259) 0:00:04.931 *********** 2025-06-02 13:25:26.022592 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:25:26.022603 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 13:25:26.022614 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 13:25:26.022624 | orchestrator | 2025-06-02 13:25:26.022635 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 13:25:26.022646 | orchestrator | Monday 02 June 2025 13:22:22 +0000 (0:00:00.369) 
0:00:05.301 *********** 2025-06-02 13:25:26.022656 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:26.022668 | orchestrator | 2025-06-02 13:25:26.022679 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-02 13:25:26.022689 | orchestrator | Monday 02 June 2025 13:22:23 +0000 (0:00:00.595) 0:00:05.896 *********** 2025-06-02 13:25:26.022723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.022754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.022768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.022780 | orchestrator | 2025-06-02 13:25:26.022799 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-02 13:25:26.022810 | orchestrator | Monday 02 June 2025 13:22:27 +0000 (0:00:03.808) 0:00:09.705 *********** 2025-06-02 13:25:26.022828 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:26.022839 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.022850 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.022861 | orchestrator | 2025-06-02 13:25:26.022872 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-02 13:25:26.022882 | orchestrator | Monday 02 June 2025 13:22:27 +0000 (0:00:00.953) 0:00:10.659 *********** 2025-06-02 13:25:26.022893 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.022903 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.022914 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:26.022925 | orchestrator | 2025-06-02 13:25:26.022935 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-02 13:25:26.022946 | orchestrator | Monday 02 June 2025 13:22:29 +0000 (0:00:01.665) 0:00:12.324 *********** 2025-06-02 13:25:26.022963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.022983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.023008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.023021 | orchestrator | 2025-06-02 13:25:26.023032 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-02 13:25:26.023043 | orchestrator | Monday 02 June 2025 13:22:33 +0000 (0:00:04.059) 0:00:16.383 *********** 2025-06-02 13:25:26.023053 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.023064 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.023075 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:26.023085 | orchestrator | 2025-06-02 13:25:26.023096 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-02 13:25:26.023106 | orchestrator | Monday 02 June 2025 13:22:34 +0000 (0:00:01.184) 0:00:17.568 *********** 2025-06-02 13:25:26.023181 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:26.023195 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:25:26.023205 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:25:26.023216 | 
orchestrator | 2025-06-02 13:25:26.023227 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 13:25:26.023238 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:04.153) 0:00:21.722 *********** 2025-06-02 13:25:26.023249 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:26.023259 | orchestrator | 2025-06-02 13:25:26.023270 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-02 13:25:26.023281 | orchestrator | Monday 02 June 2025 13:22:39 +0000 (0:00:00.531) 0:00:22.253 *********** 2025-06-02 13:25:26.023303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023324 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.023342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023355 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:26.023374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023393 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.023403 | orchestrator | 2025-06-02 13:25:26.023412 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 13:25:26.023422 | orchestrator | Monday 02 June 2025 13:22:42 +0000 (0:00:02.816) 0:00:25.070 *********** 2025-06-02 13:25:26.023436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023448 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:26.023466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023483 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.023498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023510 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.023520 | orchestrator | 2025-06-02 13:25:26.023529 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 13:25:26.023539 | orchestrator | Monday 02 June 2025 13:22:44 +0000 (0:00:02.115) 0:00:27.185 *********** 2025-06-02 13:25:26.023556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023578 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:25:26.023594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023605 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:25:26.023616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 13:25:26.023633 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:25:26.023643 | orchestrator | 2025-06-02 13:25:26.023653 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 13:25:26.023663 | orchestrator | Monday 02 June 2025 13:22:47 +0000 (0:00:02.578) 0:00:29.763 *********** 2025-06-02 13:25:26.023883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.023903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 13:25:26.023936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 13:25:26.023949 | orchestrator |
2025-06-02 13:25:26.023958 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2025-06-02 13:25:26.023968 | orchestrator | Monday 02 June 2025 13:22:50 +0000 (0:00:03.641) 0:00:33.405 ***********
2025-06-02 13:25:26.023978 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.023987 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:25:26.023997 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:25:26.024006 | orchestrator |
2025-06-02 13:25:26.024016 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-06-02 13:25:26.024025 | orchestrator | Monday 02 June 2025 13:22:51 +0000 (0:00:01.134) 0:00:34.539 ***********
2025-06-02 13:25:26.024035 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024045 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.024054 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.024064 | orchestrator |
2025-06-02 13:25:26.024074 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-06-02 13:25:26.024089 | orchestrator | Monday 02 June 2025 13:22:52 +0000 (0:00:00.310) 0:00:34.850 ***********
2025-06-02 13:25:26.024099 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024109 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.024137 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.024147 | orchestrator |
2025-06-02 13:25:26.024156 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-06-02 13:25:26.024166 | orchestrator | Monday 02 June 2025 13:22:52 +0000 (0:00:00.339) 0:00:35.189 ***********
2025-06-02 13:25:26.024176 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-06-02 13:25:26.024186 | orchestrator | ...ignoring
2025-06-02 13:25:26.024196 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-06-02 13:25:26.024205 | orchestrator | ...ignoring
2025-06-02 13:25:26.024215 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-06-02 13:25:26.024225 | orchestrator | ...ignoring
2025-06-02 13:25:26.024234 | orchestrator |
2025-06-02 13:25:26.024244 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-06-02 13:25:26.024253 | orchestrator | Monday 02 June 2025 13:23:03 +0000 (0:00:10.824) 0:00:46.014 ***********
2025-06-02 13:25:26.024263 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024273 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.024282 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.024291 | orchestrator |
2025-06-02 13:25:26.024301 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-06-02 13:25:26.024311 | orchestrator | Monday 02 June 2025 13:23:03 +0000 (0:00:00.634) 0:00:46.648 ***********
2025-06-02 13:25:26.024320 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024330 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024339 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024349 | orchestrator |
2025-06-02 13:25:26.024358 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-06-02 13:25:26.024368 | orchestrator | Monday 02 June 2025 13:23:04 +0000 (0:00:00.378) 0:00:47.027 ***********
2025-06-02 13:25:26.024377 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024387 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024396 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024406 | orchestrator |
2025-06-02 13:25:26.024416 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-06-02 13:25:26.024425 | orchestrator | Monday 02 June 2025 13:23:04 +0000 (0:00:00.422) 0:00:47.449 ***********
2025-06-02 13:25:26.024435 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024444 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024454 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024463 | orchestrator |
2025-06-02 13:25:26.024473 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-06-02 13:25:26.024488 | orchestrator | Monday 02 June 2025 13:23:05 +0000 (0:00:00.429) 0:00:47.878 ***********
2025-06-02 13:25:26.024498 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024507 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.024517 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.024527 | orchestrator |
2025-06-02 13:25:26.024536 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-06-02 13:25:26.024546 | orchestrator | Monday 02 June 2025 13:23:05 +0000 (0:00:00.618) 0:00:48.497 ***********
2025-06-02 13:25:26.024555 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024565 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024574 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024584 | orchestrator |
2025-06-02 13:25:26.024593 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 13:25:26.024612 | orchestrator | Monday 02 June 2025 13:23:06 +0000 (0:00:00.403) 0:00:48.900 ***********
2025-06-02 13:25:26.024621 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024631 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024641 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-06-02 13:25:26.024650 | orchestrator |
2025-06-02 13:25:26.024660 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-06-02 13:25:26.024669 | orchestrator | Monday 02 June 2025 13:23:06 +0000 (0:00:00.353) 0:00:49.254 ***********
2025-06-02 13:25:26.024679 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.024688 | orchestrator |
2025-06-02 13:25:26.024698 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-06-02 13:25:26.024708 | orchestrator | Monday 02 June 2025 13:23:16 +0000 (0:00:09.788) 0:00:59.043 ***********
2025-06-02 13:25:26.024722 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024732 | orchestrator |
2025-06-02 13:25:26.024742 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 13:25:26.024752 | orchestrator | Monday 02 June 2025 13:23:16 +0000 (0:00:00.122) 0:00:59.166 ***********
2025-06-02 13:25:26.024761 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024771 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.024781 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.024790 | orchestrator |
2025-06-02 13:25:26.024800 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-06-02 13:25:26.024809 | orchestrator | Monday 02 June 2025 13:23:17 +0000 (0:00:00.962) 0:01:00.128 ***********
2025-06-02 13:25:26.024819 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.024829 | orchestrator |
2025-06-02 13:25:26.024838 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-06-02 13:25:26.024848 | orchestrator | Monday 02 June 2025 13:23:25 +0000 (0:00:07.769) 0:01:07.898 ***********
2025-06-02 13:25:26.024858 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024867 | orchestrator |
2025-06-02 13:25:26.024877 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-06-02 13:25:26.024886 | orchestrator | Monday 02 June 2025 13:23:26 +0000 (0:00:01.575) 0:01:09.473 ***********
2025-06-02 13:25:26.024896 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.024905 | orchestrator |
2025-06-02 13:25:26.024915 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-06-02 13:25:26.024925 | orchestrator | Monday 02 June 2025 13:23:29 +0000 (0:00:02.430) 0:01:11.904 ***********
2025-06-02 13:25:26.024934 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.024944 | orchestrator |
2025-06-02 13:25:26.024954 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-06-02 13:25:26.024963 | orchestrator | Monday 02 June 2025 13:23:29 +0000 (0:00:00.126) 0:01:12.030 ***********
2025-06-02 13:25:26.024973 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.024996 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.025006 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.025016 | orchestrator |
2025-06-02 13:25:26.025025 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-06-02 13:25:26.025035 | orchestrator | Monday 02 June 2025 13:23:29 +0000 (0:00:00.468) 0:01:12.499 ***********
2025-06-02 13:25:26.025044 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.025054 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-02 13:25:26.025064 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:25:26.025073 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:25:26.025083 | orchestrator |
2025-06-02 13:25:26.025092 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-02 13:25:26.025102 | orchestrator | skipping: no hosts matched
2025-06-02 13:25:26.025111 | orchestrator |
2025-06-02 13:25:26.025154 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 13:25:26.025170 | orchestrator |
2025-06-02 13:25:26.025180 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-02 13:25:26.025190 | orchestrator | Monday 02 June 2025 13:23:30 +0000 (0:00:00.304) 0:01:12.804 ***********
2025-06-02 13:25:26.025199 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:25:26.025209 | orchestrator |
2025-06-02 13:25:26.025218 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-02 13:25:26.025228 | orchestrator | Monday 02 June 2025 13:23:49 +0000 (0:00:19.145) 0:01:31.949 ***********
2025-06-02 13:25:26.025237 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.025247 | orchestrator |
2025-06-02 13:25:26.025257 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-02 13:25:26.025266 | orchestrator | Monday 02 June 2025 13:24:09 +0000 (0:00:20.608) 0:01:52.558 ***********
2025-06-02 13:25:26.025276 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.025285 | orchestrator |
2025-06-02 13:25:26.025295 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 13:25:26.025304 | orchestrator |
2025-06-02 13:25:26.025314 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-02 13:25:26.025324 | orchestrator | Monday 02 June 2025 13:24:12 +0000 (0:00:02.337) 0:01:54.895 ***********
2025-06-02 13:25:26.025334 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:25:26.025343 | orchestrator |
2025-06-02 13:25:26.025353 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-02 13:25:26.025368 | orchestrator | Monday 02 June 2025 13:24:30 +0000 (0:00:17.975) 0:02:12.871 ***********
2025-06-02 13:25:26.025378 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.025388 | orchestrator |
2025-06-02 13:25:26.025398 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-02 13:25:26.025407
| orchestrator | Monday 02 June 2025 13:24:50 +0000 (0:00:20.611) 0:02:33.482 *********** 2025-06-02 13:25:26.025417 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:25:26.025426 | orchestrator | 2025-06-02 13:25:26.025436 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 13:25:26.025445 | orchestrator | 2025-06-02 13:25:26.025455 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-02 13:25:26.025464 | orchestrator | Monday 02 June 2025 13:24:53 +0000 (0:00:02.617) 0:02:36.100 *********** 2025-06-02 13:25:26.025474 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:25:26.025484 | orchestrator | 2025-06-02 13:25:26.025493 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-02 13:25:26.025503 | orchestrator | Monday 02 June 2025 13:25:10 +0000 (0:00:16.613) 0:02:52.713 *********** 2025-06-02 13:25:26.025512 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:26.025522 | orchestrator | 2025-06-02 13:25:26.025531 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-02 13:25:26.025541 | orchestrator | Monday 02 June 2025 13:25:10 +0000 (0:00:00.534) 0:02:53.248 *********** 2025-06-02 13:25:26.025551 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:25:26.025560 | orchestrator | 2025-06-02 13:25:26.025570 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 13:25:26.025579 | orchestrator | 2025-06-02 13:25:26.025589 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-02 13:25:26.025604 | orchestrator | Monday 02 June 2025 13:25:12 +0000 (0:00:02.340) 0:02:55.588 *********** 2025-06-02 13:25:26.025614 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:25:26.025623 | orchestrator | 
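The handler sequence above shows the Galera bring-up order: bootstrap the first node, wait for the service port, wait for WSREP sync, then restart the remaining members one play at a time. Each "Wait for MariaDB service to sync WSREP" task amounts to polling the node's `wsrep_local_state_comment` status variable until it reports `Synced`. A minimal sketch of that wait loop, assuming an injectable `get_status` callable and illustrative timeout values (neither is taken from the role):

```python
import time

def wait_for_wsrep_sync(get_status, timeout=360.0, interval=2.0):
    """Poll a Galera node until wsrep_local_state_comment reports 'Synced'.

    get_status is a stand-in for running
    "SHOW STATUS LIKE 'wsrep_local_state_comment'" against the node.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "Synced":
            return True
        time.sleep(interval)
    raise TimeoutError("node did not reach the Synced state in time")

# Stubbed status source: the node is still a donor twice, then synced.
states = iter(["Donor/Desynced", "Donor/Desynced", "Synced"])
print(wait_for_wsrep_sync(lambda: next(states), interval=0))  # True
```

Injecting the status source keeps the retry logic testable without a live cluster.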
2025-06-02 13:25:26.025633 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-06-02 13:25:26.025643 | orchestrator | Monday 02 June 2025 13:25:13 +0000 (0:00:00.490) 0:02:56.078 ***********
2025-06-02 13:25:26.025652 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.025662 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.025671 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.025681 | orchestrator |
2025-06-02 13:25:26.025691 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-06-02 13:25:26.025706 | orchestrator | Monday 02 June 2025 13:25:15 +0000 (0:00:02.270) 0:02:58.349 ***********
2025-06-02 13:25:26.025715 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.025725 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.025734 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.025744 | orchestrator |
2025-06-02 13:25:26.025753 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-06-02 13:25:26.025763 | orchestrator | Monday 02 June 2025 13:25:17 +0000 (0:00:02.109) 0:03:00.458 ***********
2025-06-02 13:25:26.025772 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.025782 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.025791 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.025801 | orchestrator |
2025-06-02 13:25:26.025810 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-06-02 13:25:26.025820 | orchestrator | Monday 02 June 2025 13:25:19 +0000 (0:00:02.028) 0:03:02.487 ***********
2025-06-02 13:25:26.025829 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.025839 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.025849 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:25:26.025858 | orchestrator |
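The account-creation tasks above run only on the bootstrap host (they skip on testbed-node-1 and testbed-node-2), setting up the shard root, monitor, and backup users with their grants. In SQL terms each task boils down to an idempotent `CREATE USER` plus a `GRANT`; a rough sketch of that pattern, with illustrative function and account names rather than the role's actual statements:

```python
def build_user_statements(user, host, password, privileges, scope="*.*"):
    """Return idempotent SQL for one MariaDB account: create it if it does
    not exist, then grant the requested privileges on the given scope.
    Illustrative only; the role drives this through Ansible modules."""
    return [
        f"CREATE USER IF NOT EXISTS '{user}'@'{host}' IDENTIFIED BY '{password}';",
        f"GRANT {privileges} ON {scope} TO '{user}'@'{host}';",
    ]

# Example: a monitoring account similar in spirit to the "mysql monitor user".
for stmt in build_user_statements("monitor", "%", "secret", "REPLICATION CLIENT"):
    print(stmt)
```

Running the statements only on the first cluster member, as the play does, is enough because Galera replicates the resulting grants to the other nodes.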
2025-06-02 13:25:26.025868 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-06-02 13:25:26.025877 | orchestrator | Monday 02 June 2025 13:25:21 +0000 (0:00:02.078) 0:03:04.566 ***********
2025-06-02 13:25:26.025887 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:25:26.025899 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:25:26.025916 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:25:26.025934 | orchestrator |
2025-06-02 13:25:26.025951 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-02 13:25:26.025966 | orchestrator | Monday 02 June 2025 13:25:24 +0000 (0:00:02.999) 0:03:07.565 ***********
2025-06-02 13:25:26.025984 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:25:26.026000 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:25:26.026050 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:25:26.026070 | orchestrator |
2025-06-02 13:25:26.026086 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:25:26.026103 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-02 13:25:26.026148 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-06-02 13:25:26.026167 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-02 13:25:26.026183 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-02 13:25:26.026197 | orchestrator |
2025-06-02 13:25:26.026207 | orchestrator |
2025-06-02 13:25:26.026217 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:25:26.026226 | orchestrator | Monday 02 June 2025 13:25:25 +0000 (0:00:00.224) 0:03:07.790 ***********
2025-06-02 13:25:26.026236 | orchestrator | ===============================================================================
2025-06-02 13:25:26.026245 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.22s
2025-06-02 13:25:26.026255 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.12s
2025-06-02 13:25:26.026272 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.61s
2025-06-02 13:25:26.026283 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s
2025-06-02 13:25:26.026292 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.79s
2025-06-02 13:25:26.026302 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.77s
2025-06-02 13:25:26.026320 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.95s
2025-06-02 13:25:26.026330 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.15s
2025-06-02 13:25:26.026339 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.06s
2025-06-02 13:25:26.026349 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.81s
2025-06-02 13:25:26.026358 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.64s
2025-06-02 13:25:26.026368 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.00s
2025-06-02 13:25:26.026377 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s
2025-06-02 13:25:26.026386 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.82s
2025-06-02 13:25:26.026396 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.58s
2025-06-02 13:25:26.026405 | orchestrator |
mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.43s
2025-06-02 13:25:26.026421 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.34s
2025-06-02 13:25:26.026431 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.27s
2025-06-02 13:25:26.026440 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.12s
2025-06-02 13:25:26.026449 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.11s
2025-06-02 13:25:26.026459 | orchestrator | 2025-06-02 13:25:26 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED
2025-06-02 13:25:26.026469 | orchestrator | 2025-06-02 13:25:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:25:29.074744 | orchestrator | 2025-06-02 13:25:29 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:25:29.076263 | orchestrator | 2025-06-02 13:25:29 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED
2025-06-02 13:25:29.078359 | orchestrator | 2025-06-02 13:25:29 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:25:29.078389 | orchestrator | 2025-06-02 13:25:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:20.927715 | orchestrator | 2025-06-02 13:26:20 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:20.929442 | orchestrator | 2025-06-02 13:26:20 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED
2025-06-02 13:26:20.931559 | orchestrator |
2025-06-02 13:26:20 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:20.931775 | orchestrator | 2025-06-02 13:26:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:23.985636 | orchestrator | 2025-06-02 13:26:23 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:23.985768 | orchestrator | 2025-06-02 13:26:23 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state STARTED
2025-06-02 13:26:23.985784 | orchestrator | 2025-06-02 13:26:23 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:23.985796 | orchestrator | 2025-06-02 13:26:23 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:27.029108 | orchestrator | 2025-06-02 13:26:27 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:27.033261 | orchestrator | 2025-06-02 13:26:27 | INFO  | Task 470a001a-18b0-4b3b-a719-908627951dd3 is in state SUCCESS
2025-06-02 13:26:27.036770 | orchestrator |
2025-06-02 13:26:27.036829 | orchestrator |
2025-06-02 13:26:27.036848 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-06-02 13:26:27.036863 | orchestrator |
2025-06-02 13:26:27.036878 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 13:26:27.036894 | orchestrator | Monday 02 June 2025 13:24:20 +0000 (0:00:00.531) 0:00:00.531 ***********
2025-06-02 13:26:27.036909 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:26:27.036925 | orchestrator |
2025-06-02 13:26:27.036940 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 13:26:27.036954 | orchestrator | Monday 02 June 2025 13:24:21 +0000 (0:00:00.506) 0:00:01.037 ***********
2025-06-02 13:26:27.036969 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.036986 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037001 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037015 | orchestrator |
2025-06-02 13:26:27.037030 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 13:26:27.037045 | orchestrator | Monday 02 June 2025 13:24:22 +0000 (0:00:00.581) 0:00:01.619 ***********
2025-06-02 13:26:27.037427 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037441 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037449 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037459 | orchestrator |
2025-06-02 13:26:27.037468 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 13:26:27.037477 | orchestrator | Monday 02 June 2025 13:24:22 +0000 (0:00:00.314) 0:00:01.933 ***********
2025-06-02 13:26:27.037486 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037495 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037503 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037512 | orchestrator |
2025-06-02 13:26:27.037521 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 13:26:27.037529 | orchestrator | Monday 02 June 2025 13:24:23 +0000 (0:00:00.729) 0:00:02.663 ***********
2025-06-02 13:26:27.037538 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037546 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037555 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037563 | orchestrator |
2025-06-02 13:26:27.037572 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 13:26:27.037581 | orchestrator | Monday 02 June 2025 13:24:23 +0000 (0:00:00.291) 0:00:02.954 ***********
2025-06-02 13:26:27.037589 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037598 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037606 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037615 | orchestrator |
2025-06-02 13:26:27.037623 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 13:26:27.037632 | orchestrator | Monday 02 June 2025 13:24:23 +0000 (0:00:00.302) 0:00:03.256 ***********
2025-06-02 13:26:27.037641 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037649 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037658 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037666 | orchestrator |
2025-06-02 13:26:27.037675 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 13:26:27.037684 | orchestrator | Monday 02 June 2025 13:24:23 +0000 (0:00:00.297) 0:00:03.554 ***********
2025-06-02 13:26:27.037718 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.037728 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.037736 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.037744 | orchestrator |
2025-06-02 13:26:27.037753 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-02 13:26:27.037762 | orchestrator | Monday 02 June 2025 13:24:24 +0000 (0:00:00.497) 0:00:04.052 ***********
2025-06-02 13:26:27.037770 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.037779 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.037787 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.037796 | orchestrator |
2025-06-02 13:26:27.037804 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-02 13:26:27.037813 | orchestrator | Monday 02 June 2025 13:24:24 +0000 (0:00:00.304) 0:00:04.356 ***********
2025-06-02 13:26:27.037821 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 13:26:27.037830 | orchestrator | ok: [testbed-node-3 ->
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 13:26:27.037839 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 13:26:27.037847 | orchestrator | 2025-06-02 13:26:27.037868 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 13:26:27.037877 | orchestrator | Monday 02 June 2025 13:24:25 +0000 (0:00:00.656) 0:00:05.013 *********** 2025-06-02 13:26:27.037888 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:26:27.037903 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:26:27.037917 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:26:27.037931 | orchestrator | 2025-06-02 13:26:27.037945 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 13:26:27.037959 | orchestrator | Monday 02 June 2025 13:24:25 +0000 (0:00:00.445) 0:00:05.458 *********** 2025-06-02 13:26:27.037973 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 13:26:27.037990 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 13:26:27.038005 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 13:26:27.038067 | orchestrator | 2025-06-02 13:26:27.038081 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 13:26:27.038091 | orchestrator | Monday 02 June 2025 13:24:28 +0000 (0:00:02.180) 0:00:07.639 *********** 2025-06-02 13:26:27.038101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 13:26:27.038111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 13:26:27.038121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 13:26:27.038131 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:26:27.038141 | 
orchestrator | 2025-06-02 13:26:27.038186 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 13:26:27.038244 | orchestrator | Monday 02 June 2025 13:24:28 +0000 (0:00:00.389) 0:00:08.028 *********** 2025-06-02 13:26:27.038258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038294 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:26:27.038315 | orchestrator | 2025-06-02 13:26:27.038325 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 13:26:27.038335 | orchestrator | Monday 02 June 2025 13:24:29 +0000 (0:00:00.891) 0:00:08.920 *********** 2025-06-02 13:26:27.038347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038371 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.038382 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:26:27.038391 | orchestrator | 2025-06-02 13:26:27.038400 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 13:26:27.038408 | orchestrator | Monday 02 June 2025 13:24:29 +0000 (0:00:00.177) 0:00:09.097 *********** 2025-06-02 13:26:27.038426 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '09553a5d9c69', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 13:24:26.503332', 'end': '2025-06-02 13:24:26.546432', 'delta': '0:00:00.043100', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['09553a5d9c69'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-02 13:26:27.038440 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '662c67a726da', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 13:24:27.349447', 'end': '2025-06-02 13:24:27.395150', 'delta': '0:00:00.045703', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['662c67a726da'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-02 13:26:27.038478 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ff5a2b847696', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 13:24:27.863220', 'end': '2025-06-02 13:24:27.898453', 'delta': '0:00:00.035233', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ff5a2b847696'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-02 13:26:27.038495 | orchestrator | 2025-06-02 13:26:27.038504 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-02 13:26:27.038513 | orchestrator | Monday 02 June 2025 13:24:29 +0000 (0:00:00.456) 0:00:09.554 *********** 2025-06-02 13:26:27.038522 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:26:27.038530 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:26:27.038539 | 
orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.038547 | orchestrator |
2025-06-02 13:26:27.038556 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 13:26:27.038565 | orchestrator | Monday 02 June 2025 13:24:30 +0000 (0:00:00.462) 0:00:10.016 ***********
2025-06-02 13:26:27.038573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-06-02 13:26:27.038582 | orchestrator |
2025-06-02 13:26:27.038590 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 13:26:27.038599 | orchestrator | Monday 02 June 2025 13:24:32 +0000 (0:00:01.721) 0:00:11.737 ***********
2025-06-02 13:26:27.038607 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.038616 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.038624 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.038633 | orchestrator |
2025-06-02 13:26:27.038641 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 13:26:27.038650 | orchestrator | Monday 02 June 2025 13:24:32 +0000 (0:00:00.288) 0:00:12.026 ***********
2025-06-02 13:26:27.038658 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.038667 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.038675 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.038684 | orchestrator |
2025-06-02 13:26:27.038695 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 13:26:27.038711 | orchestrator | Monday 02 June 2025 13:24:32 +0000 (0:00:00.479) 0:00:12.505 ***********
2025-06-02 13:26:27.038725 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.038740 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.038753 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.038769 | orchestrator |
2025-06-02 13:26:27.038784 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 13:26:27.038800 | orchestrator | Monday 02 June 2025 13:24:33 +0000 (0:00:00.482) 0:00:12.988 ***********
2025-06-02 13:26:27.038817 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.038831 | orchestrator |
2025-06-02 13:26:27.038848 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 13:26:27.038863 | orchestrator | Monday 02 June 2025 13:24:33 +0000 (0:00:00.134) 0:00:13.123 ***********
2025-06-02 13:26:27.038878 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.038892 | orchestrator |
2025-06-02 13:26:27.038907 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 13:26:27.038922 | orchestrator | Monday 02 June 2025 13:24:33 +0000 (0:00:00.241) 0:00:13.364 ***********
2025-06-02 13:26:27.038938 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.038954 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.038971 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.038987 | orchestrator |
2025-06-02 13:26:27.039003 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 13:26:27.039020 | orchestrator | Monday 02 June 2025 13:24:34 +0000 (0:00:00.291) 0:00:13.656 ***********
2025-06-02 13:26:27.039035 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039051 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.039066 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.039082 | orchestrator |
2025-06-02 13:26:27.039107 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 13:26:27.039123 | orchestrator | Monday 02 June 2025 13:24:34 +0000 (0:00:00.320) 0:00:13.976 ***********
2025-06-02 13:26:27.039141 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039150 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.039219 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.039228 | orchestrator |
2025-06-02 13:26:27.039237 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 13:26:27.039246 | orchestrator | Monday 02 June 2025 13:24:34 +0000 (0:00:00.530) 0:00:14.506 ***********
2025-06-02 13:26:27.039315 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039324 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.039332 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.039341 | orchestrator |
2025-06-02 13:26:27.039350 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 13:26:27.039359 | orchestrator | Monday 02 June 2025 13:24:35 +0000 (0:00:00.305) 0:00:14.812 ***********
2025-06-02 13:26:27.039367 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039376 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.039384 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.039393 | orchestrator |
2025-06-02 13:26:27.039401 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 13:26:27.039410 | orchestrator | Monday 02 June 2025 13:24:35 +0000 (0:00:00.313) 0:00:15.126 ***********
2025-06-02 13:26:27.039418 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039427 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.039435 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.039444 | orchestrator |
2025-06-02 13:26:27.039452 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 13:26:27.039511 | orchestrator | Monday 02 June 2025 13:24:35 +0000 (0:00:00.313) 0:00:15.440 ***********
2025-06-02 13:26:27.039528 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.039542 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:26:27.039556 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:26:27.039570 | orchestrator | 2025-06-02 13:26:27.039584 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 13:26:27.039599 | orchestrator | Monday 02 June 2025 13:24:36 +0000 (0:00:00.468) 0:00:15.908 *********** 2025-06-02 13:26:27.039615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e', 'dm-uuid-LVM-FYOmAn8QQ4CpCK3nPuhb6AKp6cUG7OV8xhMDe9YbSZKR2ADWyLVKfEDPeTu0i5VR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb', 'dm-uuid-LVM-FdX4Vib8EEVapD4QYvfDgclCWcH1oEGiwuHzUUFwvvQOyEvbkIpXVImafRc4ZJhm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-02 13:26:27.039786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e', 'dm-uuid-LVM-y5FVoJTWgnJjFFZL5Fwl5aMePA5QPqnhu1pcSxeWHi1uuDJvys6lSBv8CRPykUhf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.039914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.039989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e', 'dm-uuid-LVM-Rc0H5YoQMSbBO16r3zMWpGs0vLha2ANBHy7mua31QpB0Yg06fo9xDfXk9G0JGbgL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vXJrzG-5k29-eh0q-cywh-xAYl-Ab0M-C3cLoz', 'scsi-0QEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff', 'scsi-SQEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040040 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HWiNin-26Hq-V0s1-5H3G-Gy5a-yJiA-NlxMLB', 'scsi-0QEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5', 'scsi-SQEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4', 'scsi-SQEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-02 13:26:27.040202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040311 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ml01Mm-Eihy-BhtQ-obSe-5JAz-Lx7n-weQK6q', 'scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27', 'scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JX0vgn-GkDc-wZzb-ThgS-dL5d-eR68-PmwQqJ', 'scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5', 'scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85', 'scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:26:27.040375 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:26:27.040384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c', 'dm-uuid-LVM-lBPkncHf05z5HtoBkcX1eg1pWuqTRftdQebFih2hGl3yDJNEoA7jtK3elwOXvHPl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e', 'dm-uuid-LVM-DgHct2KkEKM5qxlUlYXVA6wsYZuSilPpcv1aL2fQ0o39nUSiMJGAmAVSgIxcjGRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 13:26:27.040502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FN1XK3-XZ3w-OvDj-rY2x-MbI7-9UjC-5ttYQq', 'scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de', 'scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6qZlhN-Fcn9-PIrP-8M7s-Rgq5-4H2D-VocVNU', 'scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855', 'scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da', 'scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 13:26:27.040574 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:26:27.040583 | orchestrator | 2025-06-02 13:26:27.040592 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-02 13:26:27.040601 | orchestrator | Monday 02 June 2025 13:24:36 +0000 (0:00:00.614) 0:00:16.522 *********** 2025-06-02 13:26:27.040611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e', 'dm-uuid-LVM-FYOmAn8QQ4CpCK3nPuhb6AKp6cUG7OV8xhMDe9YbSZKR2ADWyLVKfEDPeTu0i5VR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb', 'dm-uuid-LVM-FdX4Vib8EEVapD4QYvfDgclCWcH1oEGiwuHzUUFwvvQOyEvbkIpXVImafRc4ZJhm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040734 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e', 'dm-uuid-LVM-y5FVoJTWgnJjFFZL5Fwl5aMePA5QPqnhu1pcSxeWHi1uuDJvys6lSBv8CRPykUhf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040765 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e', 'dm-uuid-LVM-Rc0H5YoQMSbBO16r3zMWpGs0vLha2ANBHy7mua31QpB0Yg06fo9xDfXk9G0JGbgL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 13:26:27.040792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040832 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16', 'scsi-SQEMU_QEMU_HARDDISK_575132ae-d287-41eb-83c3-e1274e2d90eb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16065c32--ca37--5a4d--8ac9--40bfcb225d4e-osd--block--16065c32--ca37--5a4d--8ac9--40bfcb225d4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vXJrzG-5k29-eh0q-cywh-xAYl-Ab0M-C3cLoz', 'scsi-0QEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff', 'scsi-SQEMU_QEMU_HARDDISK_7282e12a-1e67-4050-babb-330e265d22ff'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb-osd--block--8c0a4a87--9c6a--5b65--b86e--eb950bafb2cb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HWiNin-26Hq-V0s1-5H3G-Gy5a-yJiA-NlxMLB', 'scsi-0QEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5', 'scsi-SQEMU_QEMU_HARDDISK_c0fd1d6c-13c9-49be-a163-e67d1493dfa5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040897 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4', 'scsi-SQEMU_QEMU_HARDDISK_a567a6c2-9a08-4ea9-919c-841e86dd2ba4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.040968 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:26:27.040994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16', 'scsi-SQEMU_QEMU_HARDDISK_2adf1974-ec50-45c6-b0e6-74793c3aa8fd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 13:26:27.041019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c', 'dm-uuid-LVM-lBPkncHf05z5HtoBkcX1eg1pWuqTRftdQebFih2hGl3yDJNEoA7jtK3elwOXvHPl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d6dea29--b52d--558c--8900--475fd450038e-osd--block--4d6dea29--b52d--558c--8900--475fd450038e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ml01Mm-Eihy-BhtQ-obSe-5JAz-Lx7n-weQK6q', 'scsi-0QEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27', 'scsi-SQEMU_QEMU_HARDDISK_62086343-a56e-4adf-83a5-5e585892be27'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e', 'dm-uuid-LVM-DgHct2KkEKM5qxlUlYXVA6wsYZuSilPpcv1aL2fQ0o39nUSiMJGAmAVSgIxcjGRg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--903578c2--c0cc--5204--b647--273ed346895e-osd--block--903578c2--c0cc--5204--b647--273ed346895e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JX0vgn-GkDc-wZzb-ThgS-dL5d-eR68-PmwQqJ', 'scsi-0QEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5', 'scsi-SQEMU_QEMU_HARDDISK_bc902884-47f1-4f9c-b2ed-b43aad7d55f5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041103 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85', 'scsi-SQEMU_QEMU_HARDDISK_fc1422f4-0fb2-4d6b-8db4-e968df408b85'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041135 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-36-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041198 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:26:27.041214 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041249 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041266 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041301 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041332 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4aa24e4c-05f0-4701-ac23-a15c2e9a093e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 13:26:27.041349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e284bd18--e265--58a5--a2ab--ec21b03cc36c-osd--block--e284bd18--e265--58a5--a2ab--ec21b03cc36c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FN1XK3-XZ3w-OvDj-rY2x-MbI7-9UjC-5ttYQq', 'scsi-0QEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de', 'scsi-SQEMU_QEMU_HARDDISK_9638a321-9046-4874-bf60-f81fe27729de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041359 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4e8c4e16--432b--566e--bc19--b5260bfeea4e-osd--block--4e8c4e16--432b--566e--bc19--b5260bfeea4e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6qZlhN-Fcn9-PIrP-8M7s-Rgq5-4H2D-VocVNU', 'scsi-0QEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855', 'scsi-SQEMU_QEMU_HARDDISK_f391f369-5642-40a7-8413-d92b55d55855'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041372 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da', 'scsi-SQEMU_QEMU_HARDDISK_21bce83c-356f-424b-8439-404f0c7bc2da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 13:26:27.041394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-12-35-58-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 13:26:27.041403 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.041412 | orchestrator |
2025-06-02 13:26:27.041421 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-02 13:26:27.041430 | orchestrator | Monday 02 June 2025 13:24:37 +0000 (0:00:00.584) 0:00:17.107 ***********
2025-06-02 13:26:27.041439 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.041448 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.041457 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.041466 | orchestrator |
2025-06-02 13:26:27.041474 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-02 13:26:27.041483 | orchestrator | Monday 02 June 2025 13:24:38 +0000 (0:00:00.658) 0:00:17.766 ***********
2025-06-02 13:26:27.041492 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.041500 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.041509 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.041518 | orchestrator |
2025-06-02 13:26:27.041526 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 13:26:27.041535 | orchestrator | Monday 02 June 2025 13:24:38 +0000 (0:00:00.439) 0:00:18.205 ***********
2025-06-02 13:26:27.041544 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.041552 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.041561 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.041569 | orchestrator |
2025-06-02 13:26:27.041578 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 13:26:27.041587 | orchestrator | Monday 02 June 2025 13:24:39 +0000 (0:00:00.657) 0:00:18.863 ***********
2025-06-02 13:26:27.041596 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.041604 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.041613 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.041622 | orchestrator |
2025-06-02 13:26:27.041631 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-02 13:26:27.041640 | orchestrator | Monday 02 June 2025 13:24:39 +0000 (0:00:00.311) 0:00:19.174 ***********
2025-06-02 13:26:27.041648 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.041657 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.041665 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.041674 | orchestrator |
2025-06-02 13:26:27.041683 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-02 13:26:27.041691 | orchestrator | Monday 02 June 2025 13:24:39 +0000 (0:00:00.449) 0:00:19.557 ***********
2025-06-02 13:26:27.041700 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.041709 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.041717 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.041726 | orchestrator |
2025-06-02 13:26:27.041735 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-02 13:26:27.041743 | orchestrator | Monday 02 June 2025 13:24:40 +0000 (0:00:00.449) 0:00:20.006 ***********
2025-06-02 13:26:27.041752 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 13:26:27.041761 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 13:26:27.041775 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 13:26:27.041783 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 13:26:27.041792 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 13:26:27.041801 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 13:26:27.041809 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 13:26:27.041818 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 13:26:27.041827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 13:26:27.041835 | orchestrator |
2025-06-02 13:26:27.041844 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-02 13:26:27.041852 | orchestrator | Monday 02 June 2025 13:24:41 +0000 (0:00:00.831) 0:00:20.837 ***********
2025-06-02 13:26:27.041861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 13:26:27.041870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 13:26:27.041884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 13:26:27.041899 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.041913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 13:26:27.041927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 13:26:27.041941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 13:26:27.041955 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.041969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 13:26:27.041983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 13:26:27.041998 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 13:26:27.042012 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.042099 | orchestrator |
2025-06-02 13:26:27.042110 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-02 13:26:27.042119 | orchestrator | Monday 02 June 2025 13:24:41 +0000 (0:00:00.338) 0:00:21.175 ***********
2025-06-02 13:26:27.042128 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:26:27.042137 | orchestrator |
2025-06-02 13:26:27.042146 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-02 13:26:27.042180 | orchestrator | Monday 02 June 2025 13:24:42 +0000 (0:00:00.673) 0:00:21.849 ***********
2025-06-02 13:26:27.042190 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042199 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.042209 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.042223 | orchestrator |
2025-06-02 13:26:27.042242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-02 13:26:27.042252 | orchestrator | Monday 02 June 2025 13:24:42 +0000 (0:00:00.329) 0:00:22.178 ***********
2025-06-02 13:26:27.042260 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042269 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.042278 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.042287 | orchestrator |
2025-06-02 13:26:27.042295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-02 13:26:27.042304 | orchestrator | Monday 02 June 2025 13:24:42 +0000 (0:00:00.322) 0:00:22.501 ***********
2025-06-02 13:26:27.042313 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042325 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.042339 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:26:27.042353 | orchestrator |
2025-06-02 13:26:27.042367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-02 13:26:27.042382 | orchestrator | Monday 02 June 2025 13:24:43 +0000 (0:00:00.327) 0:00:22.829 ***********
2025-06-02 13:26:27.042397 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.042409 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.042418 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.042436 | orchestrator |
2025-06-02 13:26:27.042444 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-02 13:26:27.042453 | orchestrator | Monday 02 June 2025 13:24:43 +0000 (0:00:00.753) 0:00:23.582 ***********
2025-06-02 13:26:27.042462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:26:27.042470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:26:27.042479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:26:27.042487 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042496 | orchestrator |
2025-06-02 13:26:27.042504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-02 13:26:27.042513 | orchestrator | Monday 02 June 2025 13:24:44 +0000 (0:00:00.367) 0:00:23.949 ***********
2025-06-02 13:26:27.042521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:26:27.042530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:26:27.042538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:26:27.042547 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042555 | orchestrator |
2025-06-02 13:26:27.042564 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-02 13:26:27.042572 | orchestrator | Monday 02 June 2025 13:24:44 +0000 (0:00:00.394) 0:00:24.344 ***********
2025-06-02 13:26:27.042581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:26:27.042589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 13:26:27.042597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 13:26:27.042606 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042615 | orchestrator |
2025-06-02 13:26:27.042623 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-02 13:26:27.042632 | orchestrator | Monday 02 June 2025 13:24:45 +0000 (0:00:00.375) 0:00:24.719 ***********
2025-06-02 13:26:27.042640 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:26:27.042649 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:26:27.042657 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:26:27.042666 | orchestrator |
2025-06-02 13:26:27.042674 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-02 13:26:27.042683 | orchestrator | Monday 02 June 2025 13:24:45 +0000 (0:00:00.324) 0:00:25.044 ***********
2025-06-02 13:26:27.042691 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-02 13:26:27.042700 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-02 13:26:27.042709 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-02 13:26:27.042717 | orchestrator |
2025-06-02 13:26:27.042726 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-02 13:26:27.042734 | orchestrator | Monday 02 June 2025 13:24:45 +0000 (0:00:00.516) 0:00:25.561 ***********
2025-06-02 13:26:27.042743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 13:26:27.042752 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 13:26:27.042766 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 13:26:27.042775 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:26:27.042784 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 13:26:27.042792 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 13:26:27.042801 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 13:26:27.042809 | orchestrator |
2025-06-02 13:26:27.042818 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-02 13:26:27.042826 | orchestrator | Monday 02 June 2025 13:24:46 +0000 (0:00:00.918) 0:00:26.479 ***********
2025-06-02 13:26:27.042835 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-02 13:26:27.042853 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 13:26:27.042862 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 13:26:27.042870 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 13:26:27.042879 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-02 13:26:27.042887 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-02 13:26:27.042896 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-02 13:26:27.042904 | orchestrator |
2025-06-02 13:26:27.042918 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-02 13:26:27.042927 | orchestrator | Monday 02 June 2025 13:24:48 +0000 (0:00:01.893) 0:00:28.373 ***********
2025-06-02 13:26:27.042935 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:26:27.042944 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:26:27.042959 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-02 13:26:27.042970 | orchestrator |
2025-06-02 13:26:27.042979 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-02 13:26:27.042988 | orchestrator | Monday 02 June 2025 13:24:49 +0000 (0:00:00.379) 0:00:28.752 ***********
2025-06-02 13:26:27.042997 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:26:27.043007 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:26:27.043016 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:26:27.043025 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:26:27.043034 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 13:26:27.043043 | orchestrator |
2025-06-02 13:26:27.043052 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-02 13:26:27.043061 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:44.114) 0:01:12.866 ***********
2025-06-02 13:26:27.043069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043078 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043086 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043095 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043103 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043111 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043120 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-02 13:26:27.043134 | orchestrator |
2025-06-02 13:26:27.043142 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-02 13:26:27.043151 | orchestrator | Monday 02 June 2025 13:25:56 +0000 (0:00:23.062) 0:01:35.929 ***********
2025-06-02 13:26:27.043232 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043246 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043262 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043277 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043292 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043307 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043320 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 13:26:27.043328 | orchestrator |
2025-06-02 13:26:27.043337 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-02 13:26:27.043346 | orchestrator | Monday 02 June 2025 13:26:08 +0000 (0:00:12.014) 0:01:47.944 ***********
2025-06-02 13:26:27.043354 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043363 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043371 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043388 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043397 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043413 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043422 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043431 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043439 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043508 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043520 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043537 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043546 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043554 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 13:26:27.043563 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 13:26:27.043572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 13:26:27.043581 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-02 13:26:27.043589 | orchestrator |
2025-06-02 13:26:27.043598 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:26:27.043607 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-02 13:26:27.043617 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 13:26:27.043626 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 13:26:27.043644 | orchestrator |
2025-06-02 13:26:27.043653 | orchestrator |
2025-06-02 13:26:27.043662 | orchestrator |
2025-06-02 13:26:27.043670 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:26:27.043679 | orchestrator | Monday 02 June 2025 13:26:25 +0000 (0:00:17.292) 0:02:05.236 ***********
2025-06-02 13:26:27.043688 | orchestrator | ===============================================================================
2025-06-02 13:26:27.043696 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.11s
2025-06-02 13:26:27.043705 | orchestrator | generate keys ---------------------------------------------------------- 23.06s
2025-06-02 13:26:27.043714 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.29s
2025-06-02 13:26:27.043723 | orchestrator | get keys from monitors ------------------------------------------------- 12.01s
2025-06-02 13:26:27.043731 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.18s
2025-06-02 13:26:27.043739 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.89s
2025-06-02 13:26:27.043747 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s
2025-06-02 13:26:27.043754 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s
2025-06-02 13:26:27.043763 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.89s
2025-06-02 13:26:27.043770 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s
2025-06-02 13:26:27.043778 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.75s
2025-06-02 13:26:27.043786 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s
2025-06-02 13:26:27.043794 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
2025-06-02 13:26:27.043806 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s
2025-06-02 13:26:27.043814 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s
2025-06-02 13:26:27.043822 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s
2025-06-02 13:26:27.043830 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s
2025-06-02 13:26:27.043838 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2025-06-02 13:26:27.043846 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.58s
2025-06-02 13:26:27.043853 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.53s
2025-06-02 13:26:27.043861 | orchestrator | 2025-06-02 13:26:27 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:27.043869 | orchestrator | 2025-06-02 13:26:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:30.086828 | orchestrator | 2025-06-02 13:26:30 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:30.088335 | orchestrator | 2025-06-02 13:26:30 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED
2025-06-02 13:26:30.089952 | orchestrator | 2025-06-02 13:26:30 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:30.089979 | orchestrator | 2025-06-02 13:26:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:33.133466 | orchestrator | 2025-06-02 13:26:33 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:33.134763 | orchestrator | 2025-06-02 13:26:33 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED
2025-06-02 13:26:33.136419 | orchestrator | 2025-06-02 13:26:33 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:33.136470 | orchestrator | 2025-06-02 13:26:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:36.185652 | orchestrator | 2025-06-02 13:26:36 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED
2025-06-02 13:26:36.186700 | orchestrator | 2025-06-02 13:26:36 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED
2025-06-02 13:26:36.189030 | orchestrator | 2025-06-02 13:26:36 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED
2025-06-02 13:26:36.189114 | orchestrator | 2025-06-02 13:26:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:26:39.245872 | orchestrator | 2025-06-02 13:26:39 | INFO  | Task
86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:39.246592 | orchestrator | 2025-06-02 13:26:39 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED 2025-06-02 13:26:39.250900 | orchestrator | 2025-06-02 13:26:39 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:39.250946 | orchestrator | 2025-06-02 13:26:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:26:42.293797 | orchestrator | 2025-06-02 13:26:42 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:42.298749 | orchestrator | 2025-06-02 13:26:42 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED 2025-06-02 13:26:42.300312 | orchestrator | 2025-06-02 13:26:42 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:42.300749 | orchestrator | 2025-06-02 13:26:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:26:45.364080 | orchestrator | 2025-06-02 13:26:45 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:45.370913 | orchestrator | 2025-06-02 13:26:45 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED 2025-06-02 13:26:45.373548 | orchestrator | 2025-06-02 13:26:45 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:45.373898 | orchestrator | 2025-06-02 13:26:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:26:48.415618 | orchestrator | 2025-06-02 13:26:48 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:48.416792 | orchestrator | 2025-06-02 13:26:48 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED 2025-06-02 13:26:48.418085 | orchestrator | 2025-06-02 13:26:48 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:48.418269 | orchestrator | 2025-06-02 13:26:48 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 13:26:51.458300 | orchestrator | 2025-06-02 13:26:51 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:51.458738 | orchestrator | 2025-06-02 13:26:51 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state STARTED 2025-06-02 13:26:51.460557 | orchestrator | 2025-06-02 13:26:51 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:51.461431 | orchestrator | 2025-06-02 13:26:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:26:54.512847 | orchestrator | 2025-06-02 13:26:54 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:54.512931 | orchestrator | 2025-06-02 13:26:54 | INFO  | Task 80f2ada7-bdec-4e1f-b153-0322f9116dbf is in state SUCCESS 2025-06-02 13:26:54.514595 | orchestrator | 2025-06-02 13:26:54 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:54.515117 | orchestrator | 2025-06-02 13:26:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:26:57.564555 | orchestrator | 2025-06-02 13:26:57 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:26:57.565627 | orchestrator | 2025-06-02 13:26:57 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:26:57.567626 | orchestrator | 2025-06-02 13:26:57 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:26:57.567661 | orchestrator | 2025-06-02 13:26:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:00.617274 | orchestrator | 2025-06-02 13:27:00 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:00.618862 | orchestrator | 2025-06-02 13:27:00 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:27:00.620454 | orchestrator | 2025-06-02 13:27:00 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 
13:27:00.620482 | orchestrator | 2025-06-02 13:27:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:03.666382 | orchestrator | 2025-06-02 13:27:03 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:03.671060 | orchestrator | 2025-06-02 13:27:03 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:27:03.671527 | orchestrator | 2025-06-02 13:27:03 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:03.671560 | orchestrator | 2025-06-02 13:27:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:06.720539 | orchestrator | 2025-06-02 13:27:06 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:06.725510 | orchestrator | 2025-06-02 13:27:06 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state STARTED 2025-06-02 13:27:06.726418 | orchestrator | 2025-06-02 13:27:06 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:06.726959 | orchestrator | 2025-06-02 13:27:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:09.764688 | orchestrator | 2025-06-02 13:27:09 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:09.766482 | orchestrator | 2025-06-02 13:27:09 | INFO  | Task 86ea526e-815d-46d4-a121-445c3589100b is in state SUCCESS 2025-06-02 13:27:09.768071 | orchestrator | 2025-06-02 13:27:09.768158 | orchestrator | 2025-06-02 13:27:09.768171 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-02 13:27:09.768269 | orchestrator | 2025-06-02 13:27:09.768284 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-02 13:27:09.768296 | orchestrator | Monday 02 June 2025 13:26:29 +0000 (0:00:00.110) 0:00:00.110 *********** 2025-06-02 13:27:09.768307 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-02 13:27:09.768569 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768586 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768598 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 13:27:09.768609 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768620 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-02 13:27:09.768631 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-02 13:27:09.768642 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-02 13:27:09.768653 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-02 13:27:09.768664 | orchestrator | 2025-06-02 13:27:09.768812 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-02 13:27:09.768860 | orchestrator | Monday 02 June 2025 13:26:33 +0000 (0:00:03.932) 0:00:04.043 *********** 2025-06-02 13:27:09.768874 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 13:27:09.768885 | orchestrator | 2025-06-02 13:27:09.768897 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-02 13:27:09.768907 | orchestrator | Monday 02 June 2025 13:26:34 +0000 (0:00:00.866) 0:00:04.910 *********** 2025-06-02 13:27:09.768918 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-02 13:27:09.768930 | orchestrator | changed: 
[testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768940 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768951 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 13:27:09.768962 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.768972 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-02 13:27:09.768983 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-02 13:27:09.768993 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-02 13:27:09.769004 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-02 13:27:09.769015 | orchestrator | 2025-06-02 13:27:09.769025 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-02 13:27:09.769048 | orchestrator | Monday 02 June 2025 13:26:47 +0000 (0:00:12.797) 0:00:17.707 *********** 2025-06-02 13:27:09.769059 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-02 13:27:09.769070 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.769080 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.769091 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-02 13:27:09.769102 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-02 13:27:09.769112 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-02 13:27:09.769123 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-02 13:27:09.769134 | orchestrator 
| changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-02 13:27:09.769145 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-02 13:27:09.769156 | orchestrator | 2025-06-02 13:27:09.769179 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:27:09.769215 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:27:09.769229 | orchestrator | 2025-06-02 13:27:09.769242 | orchestrator | 2025-06-02 13:27:09.769255 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:27:09.769267 | orchestrator | Monday 02 June 2025 13:26:53 +0000 (0:00:06.438) 0:00:24.146 *********** 2025-06-02 13:27:09.769281 | orchestrator | =============================================================================== 2025-06-02 13:27:09.769294 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.80s 2025-06-02 13:27:09.769306 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.44s 2025-06-02 13:27:09.769318 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.93s 2025-06-02 13:27:09.769331 | orchestrator | Create share directory -------------------------------------------------- 0.87s 2025-06-02 13:27:09.769343 | orchestrator | 2025-06-02 13:27:09.769355 | orchestrator | 2025-06-02 13:27:09.769368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:27:09.769390 | orchestrator | 2025-06-02 13:27:09.769416 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:27:09.769430 | orchestrator | Monday 02 June 2025 13:25:29 +0000 (0:00:00.248) 0:00:00.248 *********** 2025-06-02 13:27:09.769443 | orchestrator | ok: [testbed-node-0] 
2025-06-02 13:27:09.769456 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:27:09.769468 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:27:09.769480 | orchestrator | 2025-06-02 13:27:09.769492 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:27:09.769506 | orchestrator | Monday 02 June 2025 13:25:29 +0000 (0:00:00.217) 0:00:00.465 *********** 2025-06-02 13:27:09.769519 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-02 13:27:09.769532 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-02 13:27:09.769543 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-02 13:27:09.769554 | orchestrator | 2025-06-02 13:27:09.769565 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-02 13:27:09.769575 | orchestrator | 2025-06-02 13:27:09.769586 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 13:27:09.769597 | orchestrator | Monday 02 June 2025 13:25:29 +0000 (0:00:00.319) 0:00:00.784 *********** 2025-06-02 13:27:09.769608 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:27:09.769618 | orchestrator | 2025-06-02 13:27:09.769629 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-02 13:27:09.769640 | orchestrator | Monday 02 June 2025 13:25:30 +0000 (0:00:00.365) 0:00:01.149 *********** 2025-06-02 13:27:09.769662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.769698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-06-02 13:27:09.769721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.769741 | orchestrator | 2025-06-02 13:27:09.769752 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-02 13:27:09.769763 | orchestrator | Monday 02 June 2025 13:25:31 +0000 (0:00:00.908) 0:00:02.058 *********** 2025-06-02 13:27:09.769774 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:27:09.769784 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:27:09.769795 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:27:09.769806 | orchestrator | 2025-06-02 13:27:09.769817 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 13:27:09.769827 | orchestrator | Monday 02 June 2025 13:25:31 +0000 (0:00:00.296) 0:00:02.354 *********** 2025-06-02 13:27:09.769838 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 13:27:09.769854 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 13:27:09.769866 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 13:27:09.769877 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 13:27:09.769887 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 13:27:09.769898 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 13:27:09.769909 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-02 13:27:09.769920 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 
13:27:09.769930 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 13:27:09.769941 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 13:27:09.769952 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 13:27:09.769962 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 13:27:09.769973 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 13:27:09.769997 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 13:27:09.770008 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-02 13:27:09.770121 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 13:27:09.770143 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-02 13:27:09.770155 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-02 13:27:09.770208 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-02 13:27:09.770221 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-02 13:27:09.770232 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-02 13:27:09.770243 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-02 13:27:09.770254 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-02 13:27:09.770264 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-02 13:27:09.770276 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-02 13:27:09.770289 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-02 13:27:09.770300 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-02 13:27:09.770319 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-02 13:27:09.770330 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-02 13:27:09.770341 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-02 13:27:09.770351 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-02 13:27:09.770362 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-02 13:27:09.770373 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-02 13:27:09.770384 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-02 13:27:09.770395 | orchestrator | 2025-06-02 13:27:09.770406 | orchestrator | TASK [horizon : Update policy file 
name] *************************************** 2025-06-02 13:27:09.770417 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:00.584) 0:00:02.938 *********** 2025-06-02 13:27:09.770428 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:27:09.770438 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:27:09.770449 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:27:09.770460 | orchestrator | 2025-06-02 13:27:09.770471 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 13:27:09.770481 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:00.258) 0:00:03.197 *********** 2025-06-02 13:27:09.770492 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770503 | orchestrator | 2025-06-02 13:27:09.770523 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 13:27:09.770534 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:00.095) 0:00:03.292 *********** 2025-06-02 13:27:09.770545 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770555 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.770566 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:27:09.770577 | orchestrator | 2025-06-02 13:27:09.770587 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 13:27:09.770598 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:00.330) 0:00:03.623 *********** 2025-06-02 13:27:09.770609 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:27:09.770619 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:27:09.770630 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:27:09.770641 | orchestrator | 2025-06-02 13:27:09.770666 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 13:27:09.770677 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:00.238) 0:00:03.861 *********** 2025-06-02 
13:27:09.770688 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770699 | orchestrator | 2025-06-02 13:27:09.770709 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 13:27:09.770720 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.115) 0:00:03.977 *********** 2025-06-02 13:27:09.770731 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770741 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.770752 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:27:09.770763 | orchestrator | 2025-06-02 13:27:09.770773 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-02 13:27:09.770784 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.241) 0:00:04.219 *********** 2025-06-02 13:27:09.770802 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:27:09.770813 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:27:09.770823 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:27:09.770834 | orchestrator | 2025-06-02 13:27:09.770850 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-02 13:27:09.770861 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.262) 0:00:04.482 *********** 2025-06-02 13:27:09.770872 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770883 | orchestrator | 2025-06-02 13:27:09.770894 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-02 13:27:09.770904 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.216) 0:00:04.698 *********** 2025-06-02 13:27:09.770915 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.770925 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.770936 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:27:09.770947 | orchestrator | 2025-06-02 13:27:09.770957 | orchestrator | TASK [horizon : 
Update policy file name] ***************************************
2025-06-02 13:27:09.770968 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.273) 0:00:04.972 ***********
2025-06-02 13:27:09.770978 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.770989 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771000 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771011 | orchestrator |
2025-06-02 13:27:09.771022 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771032 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.244) 0:00:05.217 ***********
2025-06-02 13:27:09.771043 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771054 | orchestrator |
2025-06-02 13:27:09.771064 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.771075 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.117) 0:00:05.334 ***********
2025-06-02 13:27:09.771086 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771096 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.771107 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.771117 | orchestrator |
2025-06-02 13:27:09.771128 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.771139 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.238) 0:00:05.572 ***********
2025-06-02 13:27:09.771149 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.771160 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771171 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771198 | orchestrator |
2025-06-02 13:27:09.771210 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771221 | orchestrator | Monday 02 June 2025 13:25:35 +0000 (0:00:00.332) 0:00:05.905 ***********
2025-06-02 13:27:09.771232 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771242 | orchestrator |
2025-06-02 13:27:09.771253 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.771264 | orchestrator | Monday 02 June 2025 13:25:35 +0000 (0:00:00.084) 0:00:05.989 ***********
2025-06-02 13:27:09.771278 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771297 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.771316 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.771334 | orchestrator |
2025-06-02 13:27:09.771354 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.771374 | orchestrator | Monday 02 June 2025 13:25:35 +0000 (0:00:00.223) 0:00:06.213 ***********
2025-06-02 13:27:09.771393 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.771405 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771415 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771426 | orchestrator |
2025-06-02 13:27:09.771437 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771447 | orchestrator | Monday 02 June 2025 13:25:35 +0000 (0:00:00.255) 0:00:06.469 ***********
2025-06-02 13:27:09.771472 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771491 | orchestrator |
2025-06-02 13:27:09.771502 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.771513 | orchestrator | Monday 02 June 2025 13:25:35 +0000 (0:00:00.113) 0:00:06.583 ***********
2025-06-02 13:27:09.771523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771534 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.771544 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.771555 | orchestrator |
2025-06-02 13:27:09.771565 | orchestrator | TASK
[horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.771576 | orchestrator | Monday 02 June 2025 13:25:36 +0000 (0:00:00.341) 0:00:06.924 ***********
2025-06-02 13:27:09.771587 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.771605 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771616 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771627 | orchestrator |
2025-06-02 13:27:09.771638 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771648 | orchestrator | Monday 02 June 2025 13:25:36 +0000 (0:00:00.265) 0:00:07.190 ***********
2025-06-02 13:27:09.771659 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771670 | orchestrator |
2025-06-02 13:27:09.771680 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.771691 | orchestrator | Monday 02 June 2025 13:25:36 +0000 (0:00:00.111) 0:00:07.301 ***********
2025-06-02 13:27:09.771702 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771712 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.771723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.771733 | orchestrator |
2025-06-02 13:27:09.771744 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.771755 | orchestrator | Monday 02 June 2025 13:25:36 +0000 (0:00:00.259) 0:00:07.561 ***********
2025-06-02 13:27:09.771765 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.771776 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771787 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771797 | orchestrator |
2025-06-02 13:27:09.771808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771819 | orchestrator | Monday 02 June 2025 13:25:36 +0000 (0:00:00.262) 0:00:07.824 ***********
2025-06-02 13:27:09.771829 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771840 | orchestrator |
2025-06-02 13:27:09.771850 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.771861 | orchestrator | Monday 02 June 2025 13:25:37 +0000 (0:00:00.119) 0:00:07.944 ***********
2025-06-02 13:27:09.771872 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.771882 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.771893 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.771903 | orchestrator |
2025-06-02 13:27:09.771920 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.771931 | orchestrator | Monday 02 June 2025 13:25:37 +0000 (0:00:00.363) 0:00:08.307 ***********
2025-06-02 13:27:09.771941 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.771952 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.771963 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.771973 | orchestrator |
2025-06-02 13:27:09.771984 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.771995 | orchestrator | Monday 02 June 2025 13:25:37 +0000 (0:00:00.285) 0:00:08.593 ***********
2025-06-02 13:27:09.772006 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772016 | orchestrator |
2025-06-02 13:27:09.772027 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.772038 | orchestrator | Monday 02 June 2025 13:25:37 +0000 (0:00:00.113) 0:00:08.706 ***********
2025-06-02 13:27:09.772048 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772059 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.772069 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.772080 | orchestrator |
2025-06-02 13:27:09.772097 |
orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 13:27:09.772108 | orchestrator | Monday 02 June 2025 13:25:38 +0000 (0:00:00.262) 0:00:08.969 ***********
2025-06-02 13:27:09.772118 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:27:09.772129 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:27:09.772140 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:27:09.772150 | orchestrator |
2025-06-02 13:27:09.772161 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 13:27:09.772171 | orchestrator | Monday 02 June 2025 13:25:38 +0000 (0:00:00.595) 0:00:09.564 ***********
2025-06-02 13:27:09.772240 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772254 | orchestrator |
2025-06-02 13:27:09.772265 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 13:27:09.772276 | orchestrator | Monday 02 June 2025 13:25:38 +0000 (0:00:00.136) 0:00:09.700 ***********
2025-06-02 13:27:09.772286 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772297 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.772308 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.772318 | orchestrator |
2025-06-02 13:27:09.772329 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-02 13:27:09.772339 | orchestrator | Monday 02 June 2025 13:25:39 +0000 (0:00:00.310) 0:00:10.011 ***********
2025-06-02 13:27:09.772350 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:27:09.772360 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:27:09.772371 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:27:09.772382 | orchestrator |
2025-06-02 13:27:09.772392 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-02 13:27:09.772403 | orchestrator | Monday 02 June 2025 13:25:40 +0000 (0:00:01.699) 0:00:11.711 ***********
2025-06-02 13:27:09.772414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 13:27:09.772425 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 13:27:09.772435 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 13:27:09.772446 | orchestrator |
2025-06-02 13:27:09.772456 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-02 13:27:09.772467 | orchestrator | Monday 02 June 2025 13:25:42 +0000 (0:00:01.718) 0:00:13.429 ***********
2025-06-02 13:27:09.772478 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 13:27:09.772489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 13:27:09.772499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-02 13:27:09.772510 | orchestrator |
2025-06-02 13:27:09.772521 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-02 13:27:09.772538 | orchestrator | Monday 02 June 2025 13:25:44 +0000 (0:00:01.933) 0:00:15.363 ***********
2025-06-02 13:27:09.772549 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 13:27:09.772560 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 13:27:09.772570 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-02 13:27:09.772581 | orchestrator |
2025-06-02 13:27:09.772592 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-02 13:27:09.772602 | orchestrator | Monday 02 June 2025 13:25:45 +0000 (0:00:01.464) 0:00:16.828 ***********
2025-06-02 13:27:09.772613 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772624 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.772634 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.772645 | orchestrator |
2025-06-02 13:27:09.772655 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-02 13:27:09.772716 | orchestrator | Monday 02 June 2025 13:25:46 +0000 (0:00:00.296) 0:00:17.124 ***********
2025-06-02 13:27:09.772728 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:27:09.772739 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:27:09.772750 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:27:09.772760 | orchestrator |
2025-06-02 13:27:09.772770 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 13:27:09.772780 | orchestrator | Monday 02 June 2025 13:25:46 +0000 (0:00:00.290) 0:00:17.415 ***********
2025-06-02 13:27:09.772789 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:27:09.772799 | orchestrator |
2025-06-02 13:27:09.772808 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-06-02 13:27:09.772823 | orchestrator | Monday 02 June 2025 13:25:47 +0000 (0:00:00.616) 0:00:18.031 ***********
2025-06-02 13:27:09.772835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM':
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.772862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-06-02 13:27:09.772880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.772891 | orchestrator | 2025-06-02 13:27:09.772900 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 13:27:09.772910 | orchestrator | Monday 02 June 2025 13:25:48 +0000 (0:00:01.531) 0:00:19.563 *********** 2025-06-02 13:27:09.772934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.772952 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.772968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.772985 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.773000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.773011 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:27:09.773021 | orchestrator | 2025-06-02 13:27:09.773031 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 13:27:09.773040 | orchestrator | Monday 02 June 2025 13:25:49 +0000 (0:00:00.703) 0:00:20.266 *********** 2025-06-02 13:27:09.773057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.773074 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 13:27:09.773090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.773100 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.773118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 13:27:09.773134 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:27:09.773143 | orchestrator | 2025-06-02 13:27:09.773153 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-02 13:27:09.773162 | orchestrator | Monday 02 June 2025 13:25:50 +0000 (0:00:01.037) 0:00:21.304 *********** 2025-06-02 13:27:09.773177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.773217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.773235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 13:27:09.773246 | orchestrator | 2025-06-02 13:27:09.773255 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 13:27:09.773276 | orchestrator | Monday 02 June 2025 13:25:51 +0000 (0:00:01.146) 0:00:22.450 *********** 2025-06-02 13:27:09.773286 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:27:09.773296 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:27:09.773305 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 13:27:09.773314 | orchestrator | 2025-06-02 13:27:09.773324 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 13:27:09.773333 | orchestrator | Monday 02 June 2025 13:25:51 +0000 (0:00:00.311) 0:00:22.762 *********** 2025-06-02 13:27:09.773348 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:27:09.773359 | orchestrator | 2025-06-02 13:27:09.773368 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-02 13:27:09.773377 | orchestrator | Monday 02 June 2025 13:25:52 +0000 (0:00:00.693) 0:00:23.455 *********** 2025-06-02 13:27:09.773387 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:27:09.773396 | orchestrator | 2025-06-02 13:27:09.773406 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-02 13:27:09.773416 | orchestrator | Monday 02 June 2025 13:25:54 +0000 (0:00:02.107) 0:00:25.563 *********** 2025-06-02 13:27:09.773425 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:27:09.773434 | orchestrator | 2025-06-02 13:27:09.773444 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-02 13:27:09.773453 | orchestrator | Monday 02 June 2025 13:25:56 +0000 (0:00:02.000) 0:00:27.563 *********** 2025-06-02 13:27:09.773463 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:27:09.773473 | orchestrator | 2025-06-02 13:27:09.773482 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 13:27:09.773492 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:14.690) 0:00:42.254 *********** 2025-06-02 13:27:09.773501 | orchestrator | 2025-06-02 13:27:09.773511 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 
13:27:09.773520 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:00.071) 0:00:42.325 *********** 2025-06-02 13:27:09.773530 | orchestrator | 2025-06-02 13:27:09.773539 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 13:27:09.773548 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:00.061) 0:00:42.387 *********** 2025-06-02 13:27:09.773558 | orchestrator | 2025-06-02 13:27:09.773567 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-02 13:27:09.773577 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:00.063) 0:00:42.451 *********** 2025-06-02 13:27:09.773594 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:27:09.773604 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:27:09.773613 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:27:09.773623 | orchestrator | 2025-06-02 13:27:09.773632 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:27:09.773642 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-02 13:27:09.773652 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 13:27:09.773662 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 13:27:09.773672 | orchestrator | 2025-06-02 13:27:09.773681 | orchestrator | 2025-06-02 13:27:09.773691 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:27:09.773701 | orchestrator | Monday 02 June 2025 13:27:07 +0000 (0:00:55.509) 0:01:37.960 *********** 2025-06-02 13:27:09.773710 | orchestrator | =============================================================================== 2025-06-02 13:27:09.773719 | orchestrator | horizon : Restart horizon container 
------------------------------------ 55.51s 2025-06-02 13:27:09.773749 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.69s 2025-06-02 13:27:09.773759 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s 2025-06-02 13:27:09.773768 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.00s 2025-06-02 13:27:09.773778 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.93s 2025-06-02 13:27:09.773787 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.72s 2025-06-02 13:27:09.773797 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.70s 2025-06-02 13:27:09.773806 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.53s 2025-06-02 13:27:09.773815 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.46s 2025-06-02 13:27:09.773824 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.15s 2025-06-02 13:27:09.773834 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.04s 2025-06-02 13:27:09.773843 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.91s 2025-06-02 13:27:09.773853 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2025-06-02 13:27:09.773862 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2025-06-02 13:27:09.773872 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-06-02 13:27:09.773881 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2025-06-02 13:27:09.773890 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.58s 2025-06-02 13:27:09.773900 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.37s 2025-06-02 13:27:09.773909 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.36s 2025-06-02 13:27:09.773918 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.34s 2025-06-02 13:27:09.773928 | orchestrator | 2025-06-02 13:27:09 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:09.773937 | orchestrator | 2025-06-02 13:27:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:12.808869 | orchestrator | 2025-06-02 13:27:12 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:12.811372 | orchestrator | 2025-06-02 13:27:12 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:12.811408 | orchestrator | 2025-06-02 13:27:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:15.854389 | orchestrator | 2025-06-02 13:27:15 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:15.856578 | orchestrator | 2025-06-02 13:27:15 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:15.856664 | orchestrator | 2025-06-02 13:27:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:18.898077 | orchestrator | 2025-06-02 13:27:18 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:18.899395 | orchestrator | 2025-06-02 13:27:18 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:18.899429 | orchestrator | 2025-06-02 13:27:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:21.950355 | orchestrator | 2025-06-02 13:27:21 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:21.950954 | 
orchestrator | 2025-06-02 13:27:21 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:21.950988 | orchestrator | 2025-06-02 13:27:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:24.994636 | orchestrator | 2025-06-02 13:27:24 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:24.996547 | orchestrator | 2025-06-02 13:27:24 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:24.996770 | orchestrator | 2025-06-02 13:27:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:28.040469 | orchestrator | 2025-06-02 13:27:28 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:28.041092 | orchestrator | 2025-06-02 13:27:28 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:28.041383 | orchestrator | 2025-06-02 13:27:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:31.087135 | orchestrator | 2025-06-02 13:27:31 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:31.089101 | orchestrator | 2025-06-02 13:27:31 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:31.089148 | orchestrator | 2025-06-02 13:27:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:34.130014 | orchestrator | 2025-06-02 13:27:34 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:34.132425 | orchestrator | 2025-06-02 13:27:34 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:34.132460 | orchestrator | 2025-06-02 13:27:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:37.177197 | orchestrator | 2025-06-02 13:27:37 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:37.178461 | orchestrator | 2025-06-02 13:27:37 | INFO  | Task 
2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:37.178500 | orchestrator | 2025-06-02 13:27:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:40.222522 | orchestrator | 2025-06-02 13:27:40 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:40.223909 | orchestrator | 2025-06-02 13:27:40 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:40.223948 | orchestrator | 2025-06-02 13:27:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:43.264062 | orchestrator | 2025-06-02 13:27:43 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:43.265711 | orchestrator | 2025-06-02 13:27:43 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:43.265741 | orchestrator | 2025-06-02 13:27:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:46.310428 | orchestrator | 2025-06-02 13:27:46 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:46.310771 | orchestrator | 2025-06-02 13:27:46 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:46.310803 | orchestrator | 2025-06-02 13:27:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:49.344130 | orchestrator | 2025-06-02 13:27:49 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state STARTED 2025-06-02 13:27:49.346330 | orchestrator | 2025-06-02 13:27:49 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:49.346389 | orchestrator | 2025-06-02 13:27:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:52.399279 | orchestrator | 2025-06-02 13:27:52 | INFO  | Task d03535ee-9868-4cbf-a7ec-78c8903b29bb is in state STARTED 2025-06-02 13:27:52.401058 | orchestrator | 2025-06-02 13:27:52 | INFO  | Task a133e337-24c7-441a-b476-724632153293 is in state SUCCESS 2025-06-02 
13:27:52.403150 | orchestrator | 2025-06-02 13:27:52 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:27:52.404925 | orchestrator | 2025-06-02 13:27:52 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:27:52.406604 | orchestrator | 2025-06-02 13:27:52 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:52.407013 | orchestrator | 2025-06-02 13:27:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:55.449905 | orchestrator | 2025-06-02 13:27:55 | INFO  | Task d03535ee-9868-4cbf-a7ec-78c8903b29bb is in state STARTED 2025-06-02 13:27:55.456944 | orchestrator | 2025-06-02 13:27:55 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:27:55.457000 | orchestrator | 2025-06-02 13:27:55 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:27:55.457735 | orchestrator | 2025-06-02 13:27:55 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:27:55.457917 | orchestrator | 2025-06-02 13:27:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:27:58.512625 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:27:58.512719 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task d03535ee-9868-4cbf-a7ec-78c8903b29bb is in state SUCCESS 2025-06-02 13:27:58.512734 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:27:58.513447 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:27:58.513472 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:27:58.513486 | orchestrator | 2025-06-02 13:27:58 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 
13:27:58.513500 | orchestrator | 2025-06-02 13:27:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:01.553192 | orchestrator | 2025-06-02 13:28:01 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:01.553309 | orchestrator | 2025-06-02 13:28:01 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:01.553934 | orchestrator | 2025-06-02 13:28:01 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:01.556767 | orchestrator | 2025-06-02 13:28:01 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:01.556798 | orchestrator | 2025-06-02 13:28:01 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:28:01.556809 | orchestrator | 2025-06-02 13:28:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:04.591058 | orchestrator | 2025-06-02 13:28:04 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:04.591781 | orchestrator | 2025-06-02 13:28:04 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:04.593005 | orchestrator | 2025-06-02 13:28:04 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:04.593725 | orchestrator | 2025-06-02 13:28:04 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:04.594824 | orchestrator | 2025-06-02 13:28:04 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:28:04.594855 | orchestrator | 2025-06-02 13:28:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:07.637203 | orchestrator | 2025-06-02 13:28:07 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:07.637479 | orchestrator | 2025-06-02 13:28:07 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:07.638141 | orchestrator 
| 2025-06-02 13:28:07 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:07.638404 | orchestrator | 2025-06-02 13:28:07 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:07.639076 | orchestrator | 2025-06-02 13:28:07 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state STARTED 2025-06-02 13:28:07.639099 | orchestrator | 2025-06-02 13:28:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:10.671270 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:10.672305 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:10.672382 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:10.673255 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:10.674704 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task 2fdbd5af-4935-454a-b29c-9d068b66ede4 is in state SUCCESS 2025-06-02 13:28:10.677936 | orchestrator | 2025-06-02 13:28:10.677970 | orchestrator | 2025-06-02 13:28:10.677982 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-02 13:28:10.677994 | orchestrator | 2025-06-02 13:28:10.678005 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-02 13:28:10.678060 | orchestrator | Monday 02 June 2025 13:26:57 +0000 (0:00:00.216) 0:00:00.216 *********** 2025-06-02 13:28:10.678076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-02 13:28:10.678088 | orchestrator | 2025-06-02 13:28:10.678113 | orchestrator | TASK [osism.services.cephclient : Create required directories] 
***************** 2025-06-02 13:28:10.678125 | orchestrator | Monday 02 June 2025 13:26:58 +0000 (0:00:00.200) 0:00:00.417 *********** 2025-06-02 13:28:10.678137 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-02 13:28:10.678148 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-02 13:28:10.678159 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-02 13:28:10.678170 | orchestrator | 2025-06-02 13:28:10.678181 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-02 13:28:10.678192 | orchestrator | Monday 02 June 2025 13:26:59 +0000 (0:00:01.144) 0:00:01.562 *********** 2025-06-02 13:28:10.678203 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-02 13:28:10.678214 | orchestrator | 2025-06-02 13:28:10.678243 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-02 13:28:10.678254 | orchestrator | Monday 02 June 2025 13:27:00 +0000 (0:00:01.148) 0:00:02.710 *********** 2025-06-02 13:28:10.678265 | orchestrator | changed: [testbed-manager] 2025-06-02 13:28:10.678276 | orchestrator | 2025-06-02 13:28:10.678287 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-02 13:28:10.678298 | orchestrator | Monday 02 June 2025 13:27:01 +0000 (0:00:00.947) 0:00:03.657 *********** 2025-06-02 13:28:10.678309 | orchestrator | changed: [testbed-manager] 2025-06-02 13:28:10.678319 | orchestrator | 2025-06-02 13:28:10.678330 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-02 13:28:10.678341 | orchestrator | Monday 02 June 2025 13:27:02 +0000 (0:00:00.897) 0:00:04.555 *********** 2025-06-02 13:28:10.678352 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 
retries left). 2025-06-02 13:28:10.678384 | orchestrator | ok: [testbed-manager] 2025-06-02 13:28:10.678927 | orchestrator | 2025-06-02 13:28:10.678949 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-02 13:28:10.678960 | orchestrator | Monday 02 June 2025 13:27:41 +0000 (0:00:38.825) 0:00:43.380 *********** 2025-06-02 13:28:10.678971 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-02 13:28:10.678982 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-02 13:28:10.678993 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-02 13:28:10.679003 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-02 13:28:10.679014 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-02 13:28:10.679024 | orchestrator | 2025-06-02 13:28:10.679035 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-02 13:28:10.679247 | orchestrator | Monday 02 June 2025 13:27:44 +0000 (0:00:03.862) 0:00:47.243 *********** 2025-06-02 13:28:10.679266 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-02 13:28:10.679277 | orchestrator | 2025-06-02 13:28:10.679288 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-02 13:28:10.679299 | orchestrator | Monday 02 June 2025 13:27:45 +0000 (0:00:00.415) 0:00:47.658 *********** 2025-06-02 13:28:10.679309 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:28:10.679320 | orchestrator | 2025-06-02 13:28:10.679330 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-02 13:28:10.679341 | orchestrator | Monday 02 June 2025 13:27:45 +0000 (0:00:00.122) 0:00:47.781 *********** 2025-06-02 13:28:10.679352 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:28:10.679363 | orchestrator | 2025-06-02 13:28:10.679373 | orchestrator | RUNNING 
HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-02 13:28:10.679384 | orchestrator | Monday 02 June 2025 13:27:45 +0000 (0:00:00.279) 0:00:48.060 *********** 2025-06-02 13:28:10.679395 | orchestrator | changed: [testbed-manager] 2025-06-02 13:28:10.679405 | orchestrator | 2025-06-02 13:28:10.679416 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-02 13:28:10.679427 | orchestrator | Monday 02 June 2025 13:27:47 +0000 (0:00:01.357) 0:00:49.418 *********** 2025-06-02 13:28:10.679437 | orchestrator | changed: [testbed-manager] 2025-06-02 13:28:10.679448 | orchestrator | 2025-06-02 13:28:10.679458 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-02 13:28:10.679469 | orchestrator | Monday 02 June 2025 13:27:47 +0000 (0:00:00.839) 0:00:50.258 *********** 2025-06-02 13:28:10.679479 | orchestrator | changed: [testbed-manager] 2025-06-02 13:28:10.679490 | orchestrator | 2025-06-02 13:28:10.679501 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-02 13:28:10.679511 | orchestrator | Monday 02 June 2025 13:27:48 +0000 (0:00:00.560) 0:00:50.818 *********** 2025-06-02 13:28:10.679522 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-02 13:28:10.679532 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-02 13:28:10.679543 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-02 13:28:10.679554 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-02 13:28:10.679564 | orchestrator | 2025-06-02 13:28:10.679575 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:28:10.679586 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 13:28:10.679597 | orchestrator | 2025-06-02 13:28:10.679608 | orchestrator | 2025-06-02 
13:28:10.679659 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:28:10.679672 | orchestrator | Monday 02 June 2025 13:27:49 +0000 (0:00:01.422) 0:00:52.241 ***********
2025-06-02 13:28:10.679684 | orchestrator | ===============================================================================
2025-06-02 13:28:10.679694 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.83s
2025-06-02 13:28:10.679705 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.86s
2025-06-02 13:28:10.679734 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.42s
2025-06-02 13:28:10.679745 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.36s
2025-06-02 13:28:10.679756 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2025-06-02 13:28:10.679766 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s
2025-06-02 13:28:10.679777 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s
2025-06-02 13:28:10.679787 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2025-06-02 13:28:10.679798 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s
2025-06-02 13:28:10.679809 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2025-06-02 13:28:10.679819 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2025-06-02 13:28:10.679830 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s
2025-06-02 13:28:10.679840 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s
2025-06-02 13:28:10.679851 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2025-06-02 13:28:10.679862 | orchestrator |
2025-06-02 13:28:10.679872 | orchestrator |
2025-06-02 13:28:10.679883 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:28:10.679894 | orchestrator |
2025-06-02 13:28:10.679904 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:28:10.679915 | orchestrator | Monday 02 June 2025 13:27:54 +0000 (0:00:00.200) 0:00:00.200 ***********
2025-06-02 13:28:10.679926 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:28:10.679936 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:28:10.679947 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:28:10.679957 | orchestrator |
2025-06-02 13:28:10.679968 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:28:10.679978 | orchestrator | Monday 02 June 2025 13:27:54 +0000 (0:00:00.301) 0:00:00.501 ***********
2025-06-02 13:28:10.679989 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-02 13:28:10.679999 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-02 13:28:10.680010 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-02 13:28:10.680021 | orchestrator |
2025-06-02 13:28:10.680031 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-02 13:28:10.680042 | orchestrator |
2025-06-02 13:28:10.680052 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-02 13:28:10.680063 | orchestrator | Monday 02 June 2025 13:27:55 +0000 (0:00:00.681) 0:00:01.183 ***********
2025-06-02 13:28:10.680074 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:28:10.680084 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:28:10.680095 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:28:10.680105 | orchestrator |
2025-06-02 13:28:10.680116 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:28:10.680127 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:28:10.680137 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:28:10.680148 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:28:10.680159 | orchestrator |
2025-06-02 13:28:10.680169 | orchestrator |
2025-06-02 13:28:10.680180 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:28:10.680191 | orchestrator | Monday 02 June 2025 13:27:55 +0000 (0:00:00.836) 0:00:02.019 ***********
2025-06-02 13:28:10.680201 | orchestrator | ===============================================================================
2025-06-02 13:28:10.680218 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.84s
2025-06-02 13:28:10.680257 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2025-06-02 13:28:10.680268 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-02 13:28:10.680279 | orchestrator |
2025-06-02 13:28:10.680290 | orchestrator |
2025-06-02 13:28:10.680301 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:28:10.680311 | orchestrator |
2025-06-02 13:28:10.680322 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:28:10.680333 | orchestrator | Monday 02 June 2025 13:25:29 +0000 (0:00:00.251) 0:00:00.251 ***********
2025-06-02 13:28:10.680344 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:28:10.680354 |
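The "Waiting for Keystone public port to be UP" task above is a plain TCP reachability check (Ansible's `wait_for` module behaves similarly). A minimal stand-alone sketch of the same idea; it connects to a throwaway local listener rather than the real Keystone endpoint (the 192.168.16.x addresses are only reachable inside the testbed), and `wait_for_port` is an illustrative helper, not part of the playbook:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Return True once a TCP connect to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Demonstrate against a throwaway local listener instead of the real
# Keystone endpoint (port 5000 on 192.168.16.x in the log above).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
_, port = listener.getsockname()
reachable = wait_for_port("127.0.0.1", port, timeout=5.0)
listener.close()
```

In the playbook this runs once per controller node, which is why the recap shows three `ok` results for a single task.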
orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.680365 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.680375 | orchestrator | 2025-06-02 13:28:10.680386 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:28:10.680397 | orchestrator | Monday 02 June 2025 13:25:30 +0000 (0:00:00.262) 0:00:00.514 *********** 2025-06-02 13:28:10.680407 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 13:28:10.680418 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 13:28:10.680429 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 13:28:10.680440 | orchestrator | 2025-06-02 13:28:10.680451 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-02 13:28:10.680461 | orchestrator | 2025-06-02 13:28:10.680507 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.680520 | orchestrator | Monday 02 June 2025 13:25:30 +0000 (0:00:00.386) 0:00:00.900 *********** 2025-06-02 13:28:10.680531 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:28:10.680542 | orchestrator | 2025-06-02 13:28:10.680553 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-02 13:28:10.680568 | orchestrator | Monday 02 June 2025 13:25:31 +0000 (0:00:00.544) 0:00:01.445 *********** 2025-06-02 13:28:10.680584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.680602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.680622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.680668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.680786 | orchestrator | 2025-06-02 13:28:10.680797 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-02 13:28:10.680808 | orchestrator | Monday 02 June 2025 13:25:32 +0000 (0:00:01.572) 0:00:03.017 *********** 2025-06-02 13:28:10.680819 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-02 13:28:10.680830 | orchestrator | 2025-06-02 13:28:10.680841 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-02 13:28:10.680852 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.757) 0:00:03.775 *********** 2025-06-02 13:28:10.680863 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.680873 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.680884 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.680895 | orchestrator | 2025-06-02 13:28:10.680906 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-02 13:28:10.680916 | orchestrator | Monday 02 June 2025 13:25:33 +0000 (0:00:00.366) 0:00:04.141 *********** 2025-06-02 
13:28:10.680927 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:28:10.680938 | orchestrator | 2025-06-02 13:28:10.680949 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.680966 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.539) 0:00:04.680 *********** 2025-06-02 13:28:10.680977 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:28:10.680988 | orchestrator | 2025-06-02 13:28:10.680999 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-02 13:28:10.681010 | orchestrator | Monday 02 June 2025 13:25:34 +0000 (0:00:00.463) 0:00:05.144 *********** 2025-06-02 13:28:10.681026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-06-02 13:28:10.681070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681119 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681158 | 
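Each item the tasks above iterate over is a kolla-ansible service definition, and its `healthcheck.test` field carries a Docker `CMD-SHELL` probe. A minimal sketch of reading that probe command back out of such a dict; the dict below is trimmed from the log output, and `healthcheck_command` is an illustrative helper, not kolla code:

```python
# Service entry in the shape kolla-ansible logs above (trimmed from the log).
keystone_service = {
    "container_name": "keystone",
    "group": "keystone",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/keystone:26.0.1.20250530",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
        "timeout": "30",
    },
}

def healthcheck_command(service):
    """Return the shell command a CMD-SHELL healthcheck would run, or None."""
    test = service.get("healthcheck", {}).get("test", [])
    if len(test) == 2 and test[0] == "CMD-SHELL":
        return test[1]
    return None

cmd = healthcheck_command(keystone_service)
```

The per-node difference visible in the log is only the healthcheck URL (192.168.16.10/.11/.12), i.e. each container probes its own API bind address.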
orchestrator | 2025-06-02 13:28:10.681169 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 13:28:10.681180 | orchestrator | Monday 02 June 2025 13:25:37 +0000 (0:00:03.065) 0:00:08.210 *********** 2025-06-02 13:28:10.681199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 13:28:10.681282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 
13:28:10.681300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681319 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.681331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 13:28:10.681343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681366 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.681391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 13:28:10.681404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681433 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.681444 | orchestrator | 2025-06-02 13:28:10.681454 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 13:28:10.681465 | orchestrator | Monday 02 June 2025 13:25:38 +0000 (0:00:00.525) 0:00:08.735 *********** 2025-06-02 13:28:10.681477 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 13:28:10.681489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681524 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.681536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 13:28:10.681557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681580 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.681591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-06-02 13:28:10.681610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 13:28:10.681643 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.681654 | orchestrator | 2025-06-02 13:28:10.681665 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 13:28:10.681676 | orchestrator | Monday 02 June 2025 13:25:39 +0000 (0:00:00.871) 0:00:09.607 *********** 2025-06-02 13:28:10.681687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681797 | orchestrator | 2025-06-02 13:28:10.681807 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 13:28:10.681822 | orchestrator | Monday 02 June 2025 13:25:42 +0000 (0:00:03.691) 0:00:13.298 *********** 2025-06-02 13:28:10.681840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-06-02 13:28:10.681905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.681922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.681954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.681984 | orchestrator | 2025-06-02 13:28:10.681994 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 13:28:10.682003 | orchestrator | Monday 02 June 2025 13:25:47 +0000 (0:00:04.511) 0:00:17.810 *********** 2025-06-02 13:28:10.682013 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:28:10.682073 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.682083 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:28:10.682093 | orchestrator | 2025-06-02 13:28:10.682102 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-02 13:28:10.682112 | orchestrator | Monday 02 June 2025 13:25:48 +0000 (0:00:01.335) 0:00:19.146 *********** 2025-06-02 13:28:10.682122 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.682131 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.682141 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.682150 | orchestrator | 2025-06-02 13:28:10.682160 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-02 13:28:10.682176 | orchestrator | Monday 02 June 2025 13:25:49 +0000 (0:00:00.511) 0:00:19.658 *********** 2025-06-02 13:28:10.682186 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.682196 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.682205 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.682215 | orchestrator 
| 2025-06-02 13:28:10.682241 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-02 13:28:10.682252 | orchestrator | Monday 02 June 2025 13:25:49 +0000 (0:00:00.463) 0:00:20.122 *********** 2025-06-02 13:28:10.682261 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.682271 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.682280 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.682290 | orchestrator | 2025-06-02 13:28:10.682299 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-02 13:28:10.682309 | orchestrator | Monday 02 June 2025 13:25:50 +0000 (0:00:00.346) 0:00:20.469 *********** 2025-06-02 13:28:10.682334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.682346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.682357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.682368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.682390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.682405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 13:28:10.682416 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.682426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.682436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.682451 | orchestrator | 2025-06-02 
13:28:10.682461 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.682471 | orchestrator | Monday 02 June 2025 13:25:52 +0000 (0:00:02.308) 0:00:22.778 *********** 2025-06-02 13:28:10.682480 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.682490 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.682499 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.682509 | orchestrator | 2025-06-02 13:28:10.682518 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-02 13:28:10.682528 | orchestrator | Monday 02 June 2025 13:25:52 +0000 (0:00:00.315) 0:00:23.093 *********** 2025-06-02 13:28:10.682538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 13:28:10.682547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 13:28:10.682557 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 13:28:10.682567 | orchestrator | 2025-06-02 13:28:10.682576 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-02 13:28:10.682586 | orchestrator | Monday 02 June 2025 13:25:54 +0000 (0:00:02.003) 0:00:25.096 *********** 2025-06-02 13:28:10.682596 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:28:10.682605 | orchestrator | 2025-06-02 13:28:10.682615 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-02 13:28:10.682624 | orchestrator | Monday 02 June 2025 13:25:55 +0000 (0:00:00.905) 0:00:26.001 *********** 2025-06-02 13:28:10.682634 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.682643 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.682653 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 13:28:10.682662 | orchestrator | 2025-06-02 13:28:10.682672 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-02 13:28:10.682682 | orchestrator | Monday 02 June 2025 13:25:56 +0000 (0:00:00.562) 0:00:26.564 *********** 2025-06-02 13:28:10.682691 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 13:28:10.682706 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:28:10.682716 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 13:28:10.682725 | orchestrator | 2025-06-02 13:28:10.682735 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-02 13:28:10.682745 | orchestrator | Monday 02 June 2025 13:25:57 +0000 (0:00:01.126) 0:00:27.690 *********** 2025-06-02 13:28:10.682754 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.682764 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.682773 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.682783 | orchestrator | 2025-06-02 13:28:10.682797 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-02 13:28:10.682807 | orchestrator | Monday 02 June 2025 13:25:57 +0000 (0:00:00.310) 0:00:28.000 *********** 2025-06-02 13:28:10.682816 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 13:28:10.682826 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 13:28:10.682835 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 13:28:10.682845 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 13:28:10.682854 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 13:28:10.682864 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 13:28:10.682873 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 13:28:10.682883 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 13:28:10.682898 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 13:28:10.682908 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 13:28:10.682917 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 13:28:10.682927 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 13:28:10.682936 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 13:28:10.682946 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 13:28:10.682956 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 13:28:10.682965 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:28:10.682975 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:28:10.682984 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:28:10.682994 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 13:28:10.683004 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 
13:28:10.683013 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 13:28:10.683023 | orchestrator | 2025-06-02 13:28:10.683032 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 13:28:10.683042 | orchestrator | Monday 02 June 2025 13:26:06 +0000 (0:00:08.731) 0:00:36.731 *********** 2025-06-02 13:28:10.683051 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:28:10.683061 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:28:10.683070 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:28:10.683080 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 13:28:10.683090 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 13:28:10.683099 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 13:28:10.683109 | orchestrator | 2025-06-02 13:28:10.683118 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 13:28:10.683128 | orchestrator | Monday 02 June 2025 13:26:08 +0000 (0:00:02.593) 0:00:39.324 *********** 2025-06-02 13:28:10.683165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.683179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.683196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 13:28:10.683207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.683217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 
13:28:10.683277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 13:28:10.683293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.683310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.683320 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 13:28:10.683330 | orchestrator | 2025-06-02 13:28:10.683340 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.683350 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:02.298) 0:00:41.623 *********** 2025-06-02 13:28:10.683359 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.683369 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.683379 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.683388 | orchestrator | 2025-06-02 13:28:10.683398 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 13:28:10.683408 | orchestrator | Monday 02 June 2025 13:26:11 +0000 (0:00:00.322) 0:00:41.945 *********** 2025-06-02 13:28:10.683417 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683427 | orchestrator | 2025-06-02 13:28:10.683436 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 13:28:10.683446 | orchestrator | Monday 02 June 2025 13:26:13 +0000 (0:00:02.082) 0:00:44.028 *********** 2025-06-02 13:28:10.683455 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683465 | orchestrator | 2025-06-02 13:28:10.683474 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] ********** 2025-06-02 13:28:10.683484 | orchestrator | Monday 02 June 2025 13:26:16 +0000 (0:00:02.857) 0:00:46.885 *********** 2025-06-02 13:28:10.683494 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.683503 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.683513 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.683520 | orchestrator | 2025-06-02 13:28:10.683528 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 13:28:10.683536 | orchestrator | Monday 02 June 2025 13:26:17 +0000 (0:00:00.936) 0:00:47.822 *********** 2025-06-02 13:28:10.683544 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.683552 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.683560 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.683567 | orchestrator | 2025-06-02 13:28:10.683575 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 13:28:10.683583 | orchestrator | Monday 02 June 2025 13:26:17 +0000 (0:00:00.314) 0:00:48.136 *********** 2025-06-02 13:28:10.683591 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.683613 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.683622 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.683629 | orchestrator | 2025-06-02 13:28:10.683637 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 13:28:10.683645 | orchestrator | Monday 02 June 2025 13:26:18 +0000 (0:00:00.349) 0:00:48.486 *********** 2025-06-02 13:28:10.683653 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683660 | orchestrator | 2025-06-02 13:28:10.683668 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 13:28:10.683676 | orchestrator | Monday 02 June 2025 13:26:31 +0000 (0:00:13.438) 0:01:01.925 *********** 2025-06-02 13:28:10.683683 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683691 | orchestrator | 2025-06-02 13:28:10.683703 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 13:28:10.683711 | orchestrator | Monday 02 June 2025 13:26:40 +0000 (0:00:09.204) 0:01:11.130 *********** 2025-06-02 13:28:10.683719 | orchestrator | 2025-06-02 13:28:10.683727 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 13:28:10.683735 | orchestrator | Monday 02 June 2025 13:26:40 +0000 (0:00:00.239) 0:01:11.369 *********** 2025-06-02 13:28:10.683742 | orchestrator | 2025-06-02 13:28:10.683750 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 13:28:10.683761 | orchestrator | Monday 02 June 2025 13:26:41 +0000 (0:00:00.064) 0:01:11.434 *********** 2025-06-02 13:28:10.683769 | orchestrator | 2025-06-02 13:28:10.683777 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 13:28:10.683785 | orchestrator | Monday 02 June 2025 13:26:41 +0000 (0:00:00.058) 0:01:11.493 *********** 2025-06-02 13:28:10.683793 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683801 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:28:10.683808 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:28:10.683816 | orchestrator | 2025-06-02 13:28:10.683824 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 13:28:10.683832 | orchestrator | Monday 02 June 2025 13:27:02 +0000 (0:00:21.773) 0:01:33.266 *********** 2025-06-02 13:28:10.683839 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:28:10.683847 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:28:10.683855 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683863 | orchestrator | 2025-06-02 13:28:10.683871 | orchestrator | RUNNING HANDLER 
[keystone : Restart keystone container] ************************ 2025-06-02 13:28:10.683878 | orchestrator | Monday 02 June 2025 13:27:10 +0000 (0:00:07.490) 0:01:40.757 *********** 2025-06-02 13:28:10.683886 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:28:10.683894 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.683902 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:28:10.683909 | orchestrator | 2025-06-02 13:28:10.683917 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.683925 | orchestrator | Monday 02 June 2025 13:27:21 +0000 (0:00:11.582) 0:01:52.339 *********** 2025-06-02 13:28:10.683933 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:28:10.683941 | orchestrator | 2025-06-02 13:28:10.683949 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 13:28:10.683957 | orchestrator | Monday 02 June 2025 13:27:22 +0000 (0:00:00.776) 0:01:53.116 *********** 2025-06-02 13:28:10.683965 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.683972 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:28:10.683980 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:28:10.683988 | orchestrator | 2025-06-02 13:28:10.683996 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 13:28:10.684004 | orchestrator | Monday 02 June 2025 13:27:23 +0000 (0:00:00.712) 0:01:53.829 *********** 2025-06-02 13:28:10.684011 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:28:10.684019 | orchestrator | 2025-06-02 13:28:10.684027 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 13:28:10.684039 | orchestrator | Monday 02 June 2025 13:27:25 +0000 (0:00:01.743) 0:01:55.572 *********** 2025-06-02 13:28:10.684047 | orchestrator 
| changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 13:28:10.684055 | orchestrator | 2025-06-02 13:28:10.684062 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 13:28:10.684070 | orchestrator | Monday 02 June 2025 13:27:35 +0000 (0:00:09.911) 0:02:05.484 *********** 2025-06-02 13:28:10.684078 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 13:28:10.684086 | orchestrator | 2025-06-02 13:28:10.684094 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 13:28:10.684101 | orchestrator | Monday 02 June 2025 13:27:55 +0000 (0:00:20.612) 0:02:26.096 *********** 2025-06-02 13:28:10.684109 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 13:28:10.684117 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 13:28:10.684125 | orchestrator | 2025-06-02 13:28:10.684132 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 13:28:10.684140 | orchestrator | Monday 02 June 2025 13:28:02 +0000 (0:00:06.930) 0:02:33.027 *********** 2025-06-02 13:28:10.684148 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.684156 | orchestrator | 2025-06-02 13:28:10.684163 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 13:28:10.684171 | orchestrator | Monday 02 June 2025 13:28:03 +0000 (0:00:00.471) 0:02:33.499 *********** 2025-06-02 13:28:10.684179 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.684187 | orchestrator | 2025-06-02 13:28:10.684195 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 13:28:10.684203 | orchestrator | Monday 02 June 2025 13:28:03 +0000 (0:00:00.250) 0:02:33.749 *********** 
2025-06-02 13:28:10.684210 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.684218 | orchestrator | 2025-06-02 13:28:10.684237 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 13:28:10.684245 | orchestrator | Monday 02 June 2025 13:28:03 +0000 (0:00:00.120) 0:02:33.870 *********** 2025-06-02 13:28:10.684253 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.684261 | orchestrator | 2025-06-02 13:28:10.684268 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 13:28:10.684276 | orchestrator | Monday 02 June 2025 13:28:03 +0000 (0:00:00.265) 0:02:34.135 *********** 2025-06-02 13:28:10.684284 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:28:10.684292 | orchestrator | 2025-06-02 13:28:10.684300 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 13:28:10.684307 | orchestrator | Monday 02 June 2025 13:28:06 +0000 (0:00:03.212) 0:02:37.347 *********** 2025-06-02 13:28:10.684315 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:28:10.684323 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:28:10.684331 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:28:10.684339 | orchestrator | 2025-06-02 13:28:10.684351 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:28:10.684360 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 13:28:10.684372 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 13:28:10.684380 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 13:28:10.684388 | orchestrator | 2025-06-02 13:28:10.684396 | orchestrator | 2025-06-02 13:28:10.684404 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-02 13:28:10.684411 | orchestrator | Monday 02 June 2025 13:28:07 +0000 (0:00:00.587) 0:02:37.934 *********** 2025-06-02 13:28:10.684424 | orchestrator | =============================================================================== 2025-06-02 13:28:10.684432 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 21.77s 2025-06-02 13:28:10.684439 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.61s 2025-06-02 13:28:10.684447 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.44s 2025-06-02 13:28:10.684455 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.58s 2025-06-02 13:28:10.684463 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.91s 2025-06-02 13:28:10.684470 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.20s 2025-06-02 13:28:10.684478 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.73s 2025-06-02 13:28:10.684486 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.49s 2025-06-02 13:28:10.684494 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.93s 2025-06-02 13:28:10.684501 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.51s 2025-06-02 13:28:10.684509 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.69s 2025-06-02 13:28:10.684517 | orchestrator | keystone : Creating default user role ----------------------------------- 3.21s 2025-06-02 13:28:10.684525 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.07s 2025-06-02 13:28:10.684532 | orchestrator | keystone : Creating Keystone 
database user and setting permissions ------ 2.86s 2025-06-02 13:28:10.684540 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s 2025-06-02 13:28:10.684548 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-06-02 13:28:10.684556 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.30s 2025-06-02 13:28:10.684563 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.08s 2025-06-02 13:28:10.684571 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.00s 2025-06-02 13:28:10.684579 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s 2025-06-02 13:28:10.684586 | orchestrator | 2025-06-02 13:28:10 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:10.684594 | orchestrator | 2025-06-02 13:28:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:13.722071 | orchestrator | 2025-06-02 13:28:13 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:13.722558 | orchestrator | 2025-06-02 13:28:13 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:13.724302 | orchestrator | 2025-06-02 13:28:13 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:13.726384 | orchestrator | 2025-06-02 13:28:13 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:13.726774 | orchestrator | 2025-06-02 13:28:13 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:13.726795 | orchestrator | 2025-06-02 13:28:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:16.759467 | orchestrator | 2025-06-02 13:28:16 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:16.759564 | 
orchestrator | 2025-06-02 13:28:16 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:16.759580 | orchestrator | 2025-06-02 13:28:16 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:16.760765 | orchestrator | 2025-06-02 13:28:16 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:16.761453 | orchestrator | 2025-06-02 13:28:16 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:16.761598 | orchestrator | 2025-06-02 13:28:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:19.799651 | orchestrator | 2025-06-02 13:28:19 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:19.803910 | orchestrator | 2025-06-02 13:28:19 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:19.804548 | orchestrator | 2025-06-02 13:28:19 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:19.805396 | orchestrator | 2025-06-02 13:28:19 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:19.806207 | orchestrator | 2025-06-02 13:28:19 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:19.807389 | orchestrator | 2025-06-02 13:28:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:22.832953 | orchestrator | 2025-06-02 13:28:22 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:22.833425 | orchestrator | 2025-06-02 13:28:22 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:22.834074 | orchestrator | 2025-06-02 13:28:22 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:22.834661 | orchestrator | 2025-06-02 13:28:22 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:22.835475 | 
orchestrator | 2025-06-02 13:28:22 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:22.835508 | orchestrator | 2025-06-02 13:28:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:25.875396 | orchestrator | 2025-06-02 13:28:25 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:25.875674 | orchestrator | 2025-06-02 13:28:25 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:25.876362 | orchestrator | 2025-06-02 13:28:25 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:25.877147 | orchestrator | 2025-06-02 13:28:25 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:25.877746 | orchestrator | 2025-06-02 13:28:25 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:25.880421 | orchestrator | 2025-06-02 13:28:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:28.909327 | orchestrator | 2025-06-02 13:28:28 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state STARTED 2025-06-02 13:28:28.909423 | orchestrator | 2025-06-02 13:28:28 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:28.909439 | orchestrator | 2025-06-02 13:28:28 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:28.909452 | orchestrator | 2025-06-02 13:28:28 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:28.913206 | orchestrator | 2025-06-02 13:28:28 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:28.913232 | orchestrator | 2025-06-02 13:28:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:31.950687 | orchestrator | 2025-06-02 13:28:31 | INFO  | Task e2bab499-bbd6-4377-9e69-9c5b0598905e is in state SUCCESS 2025-06-02 13:28:31.950776 | orchestrator | 2025-06-02 
13:28:31 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:31.952472 | orchestrator | 2025-06-02 13:28:31 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:31.953209 | orchestrator | 2025-06-02 13:28:31 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:31.954137 | orchestrator | 2025-06-02 13:28:31 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:31.954881 | orchestrator | 2025-06-02 13:28:31 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:31.955058 | orchestrator | 2025-06-02 13:28:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:34.980680 | orchestrator | 2025-06-02 13:28:34 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:34.980774 | orchestrator | 2025-06-02 13:28:34 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:34.981851 | orchestrator | 2025-06-02 13:28:34 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:34.982781 | orchestrator | 2025-06-02 13:28:34 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:34.984411 | orchestrator | 2025-06-02 13:28:34 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:34.984434 | orchestrator | 2025-06-02 13:28:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:38.021713 | orchestrator | 2025-06-02 13:28:38 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:38.022732 | orchestrator | 2025-06-02 13:28:38 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:38.024923 | orchestrator | 2025-06-02 13:28:38 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:38.030813 | orchestrator | 2025-06-02 
13:28:38 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:38.032788 | orchestrator | 2025-06-02 13:28:38 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:38.032831 | orchestrator | 2025-06-02 13:28:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:41.077066 | orchestrator | 2025-06-02 13:28:41 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:41.077322 | orchestrator | 2025-06-02 13:28:41 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:41.078124 | orchestrator | 2025-06-02 13:28:41 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:41.078928 | orchestrator | 2025-06-02 13:28:41 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:41.080224 | orchestrator | 2025-06-02 13:28:41 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:41.080278 | orchestrator | 2025-06-02 13:28:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:44.101985 | orchestrator | 2025-06-02 13:28:44 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:44.102136 | orchestrator | 2025-06-02 13:28:44 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:44.102788 | orchestrator | 2025-06-02 13:28:44 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:44.103157 | orchestrator | 2025-06-02 13:28:44 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:44.104013 | orchestrator | 2025-06-02 13:28:44 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:44.104036 | orchestrator | 2025-06-02 13:28:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:47.126778 | orchestrator | 2025-06-02 13:28:47 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:47.127118 | orchestrator | 2025-06-02 13:28:47 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:47.127821 | orchestrator | 2025-06-02 13:28:47 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:47.128574 | orchestrator | 2025-06-02 13:28:47 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:47.129175 | orchestrator | 2025-06-02 13:28:47 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:47.129351 | orchestrator | 2025-06-02 13:28:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:50.155639 | orchestrator | 2025-06-02 13:28:50 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:50.155938 | orchestrator | 2025-06-02 13:28:50 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:50.156544 | orchestrator | 2025-06-02 13:28:50 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:50.157868 | orchestrator | 2025-06-02 13:28:50 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:50.159197 | orchestrator | 2025-06-02 13:28:50 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:50.159229 | orchestrator | 2025-06-02 13:28:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:53.191663 | orchestrator | 2025-06-02 13:28:53 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:53.192538 | orchestrator | 2025-06-02 13:28:53 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:53.192570 | orchestrator | 2025-06-02 13:28:53 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:53.193378 | orchestrator | 2025-06-02 13:28:53 | INFO  | Task 
90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:53.194222 | orchestrator | 2025-06-02 13:28:53 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:53.194300 | orchestrator | 2025-06-02 13:28:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:56.239014 | orchestrator | 2025-06-02 13:28:56 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:56.239298 | orchestrator | 2025-06-02 13:28:56 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:56.240976 | orchestrator | 2025-06-02 13:28:56 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:56.241504 | orchestrator | 2025-06-02 13:28:56 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:56.242233 | orchestrator | 2025-06-02 13:28:56 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:56.242282 | orchestrator | 2025-06-02 13:28:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:28:59.283493 | orchestrator | 2025-06-02 13:28:59 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:28:59.283611 | orchestrator | 2025-06-02 13:28:59 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:28:59.284162 | orchestrator | 2025-06-02 13:28:59 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:28:59.284677 | orchestrator | 2025-06-02 13:28:59 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:28:59.285223 | orchestrator | 2025-06-02 13:28:59 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:28:59.285276 | orchestrator | 2025-06-02 13:28:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:02.318597 | orchestrator | 2025-06-02 13:29:02 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:02.318682 | orchestrator | 2025-06-02 13:29:02 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:02.318840 | orchestrator | 2025-06-02 13:29:02 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:29:02.319439 | orchestrator | 2025-06-02 13:29:02 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:02.320174 | orchestrator | 2025-06-02 13:29:02 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:02.320200 | orchestrator | 2025-06-02 13:29:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:05.353488 | orchestrator | 2025-06-02 13:29:05 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:05.354863 | orchestrator | 2025-06-02 13:29:05 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:05.355748 | orchestrator | 2025-06-02 13:29:05 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state STARTED 2025-06-02 13:29:05.355823 | orchestrator | 2025-06-02 13:29:05 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:05.357315 | orchestrator | 2025-06-02 13:29:05 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:05.357345 | orchestrator | 2025-06-02 13:29:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:08.387118 | orchestrator | 2025-06-02 13:29:08 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:08.387438 | orchestrator | 2025-06-02 13:29:08 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:08.387477 | orchestrator | 2025-06-02 13:29:08 | INFO  | Task 9d462b25-8830-4555-b38e-118735e465f8 is in state SUCCESS 2025-06-02 13:29:08.387724 | orchestrator | 2025-06-02 13:29:08.387747 | orchestrator 
| 2025-06-02 13:29:08.387760 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:29:08.387772 | orchestrator | 2025-06-02 13:29:08.387784 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:29:08.387797 | orchestrator | Monday 02 June 2025 13:28:00 +0000 (0:00:00.244) 0:00:00.244 *********** 2025-06-02 13:29:08.387809 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:29:08.387821 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:29:08.387833 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:29:08.387845 | orchestrator | ok: [testbed-manager] 2025-06-02 13:29:08.387856 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:29:08.387868 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:29:08.387879 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:29:08.387891 | orchestrator | 2025-06-02 13:29:08.387903 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:29:08.387915 | orchestrator | Monday 02 June 2025 13:28:01 +0000 (0:00:00.868) 0:00:01.112 *********** 2025-06-02 13:29:08.387927 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.387938 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.387950 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.387962 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.387973 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.387985 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.388021 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 13:29:08.388034 | orchestrator | 2025-06-02 13:29:08.388058 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 
2025-06-02 13:29:08.388069 | orchestrator | 2025-06-02 13:29:08.388080 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 13:29:08.388092 | orchestrator | Monday 02 June 2025 13:28:03 +0000 (0:00:01.532) 0:00:02.644 *********** 2025-06-02 13:29:08.388104 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:29:08.388117 | orchestrator | 2025-06-02 13:29:08.388129 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 13:29:08.388140 | orchestrator | Monday 02 June 2025 13:28:04 +0000 (0:00:01.388) 0:00:04.033 *********** 2025-06-02 13:29:08.388152 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 13:29:08.388164 | orchestrator | 2025-06-02 13:29:08.388176 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 13:29:08.388187 | orchestrator | Monday 02 June 2025 13:28:07 +0000 (0:00:03.302) 0:00:07.335 *********** 2025-06-02 13:29:08.388199 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 13:29:08.388212 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 13:29:08.388336 | orchestrator | 2025-06-02 13:29:08.388351 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 13:29:08.388362 | orchestrator | Monday 02 June 2025 13:28:13 +0000 (0:00:05.442) 0:00:12.778 *********** 2025-06-02 13:29:08.388373 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:29:08.388384 | orchestrator | 2025-06-02 13:29:08.388395 | orchestrator | TASK [service-ks-register : ceph-rgw | 
Creating users] ************************* 2025-06-02 13:29:08.388406 | orchestrator | Monday 02 June 2025 13:28:16 +0000 (0:00:02.779) 0:00:15.558 *********** 2025-06-02 13:29:08.388416 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:29:08.388427 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 13:29:08.388438 | orchestrator | 2025-06-02 13:29:08.388448 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 13:29:08.388459 | orchestrator | Monday 02 June 2025 13:28:19 +0000 (0:00:03.805) 0:00:19.364 *********** 2025-06-02 13:29:08.388470 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:29:08.388481 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-02 13:29:08.388491 | orchestrator | 2025-06-02 13:29:08.388502 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 13:29:08.388513 | orchestrator | Monday 02 June 2025 13:28:25 +0000 (0:00:05.935) 0:00:25.299 *********** 2025-06-02 13:29:08.388524 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-02 13:29:08.388534 | orchestrator | 2025-06-02 13:29:08.388546 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:29:08.388556 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388568 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388579 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388590 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388601 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388633 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388645 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.388656 | orchestrator | 2025-06-02 13:29:08.388666 | orchestrator | 2025-06-02 13:29:08.388677 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:29:08.388688 | orchestrator | Monday 02 June 2025 13:28:30 +0000 (0:00:04.202) 0:00:29.502 *********** 2025-06-02 13:29:08.388699 | orchestrator | =============================================================================== 2025-06-02 13:29:08.388710 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.94s 2025-06-02 13:29:08.388720 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.44s 2025-06-02 13:29:08.388731 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.20s 2025-06-02 13:29:08.388742 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.81s 2025-06-02 13:29:08.388752 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.30s 2025-06-02 13:29:08.388763 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.78s 2025-06-02 13:29:08.388774 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s 2025-06-02 13:29:08.388784 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.39s 2025-06-02 13:29:08.388795 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2025-06-02 13:29:08.388806 | orchestrator | 2025-06-02 13:29:08.388816 | orchestrator | 2025-06-02 13:29:08.388834 | orchestrator 
| PLAY [Bootstrap ceph dashboard] ************************************************ 2025-06-02 13:29:08.388845 | orchestrator | 2025-06-02 13:29:08.388856 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-02 13:29:08.388866 | orchestrator | Monday 02 June 2025 13:27:54 +0000 (0:00:00.262) 0:00:00.262 *********** 2025-06-02 13:29:08.388877 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.388888 | orchestrator | 2025-06-02 13:29:08.388898 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-02 13:29:08.388909 | orchestrator | Monday 02 June 2025 13:27:55 +0000 (0:00:01.425) 0:00:01.688 *********** 2025-06-02 13:29:08.388920 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.388931 | orchestrator | 2025-06-02 13:29:08.388942 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-02 13:29:08.388953 | orchestrator | Monday 02 June 2025 13:27:56 +0000 (0:00:00.791) 0:00:02.480 *********** 2025-06-02 13:29:08.388965 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.388979 | orchestrator | 2025-06-02 13:29:08.388991 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-02 13:29:08.389004 | orchestrator | Monday 02 June 2025 13:27:57 +0000 (0:00:00.868) 0:00:03.348 *********** 2025-06-02 13:29:08.389016 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389029 | orchestrator | 2025-06-02 13:29:08.389042 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 13:29:08.389054 | orchestrator | Monday 02 June 2025 13:27:58 +0000 (0:00:01.078) 0:00:04.344 *********** 2025-06-02 13:29:08.389066 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389078 | orchestrator | 2025-06-02 13:29:08.389090 | orchestrator | TASK [Set
mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 13:29:08.389103 | orchestrator | Monday 02 June 2025 13:27:59 +0000 (0:00:01.078) 0:00:05.423 *********** 2025-06-02 13:29:08.389116 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389128 | orchestrator | 2025-06-02 13:29:08.389140 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 13:29:08.389152 | orchestrator | Monday 02 June 2025 13:28:00 +0000 (0:00:00.916) 0:00:06.340 *********** 2025-06-02 13:29:08.389171 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389184 | orchestrator | 2025-06-02 13:29:08.389196 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 13:29:08.389208 | orchestrator | Monday 02 June 2025 13:28:01 +0000 (0:00:01.051) 0:00:07.392 *********** 2025-06-02 13:29:08.389221 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389234 | orchestrator | 2025-06-02 13:29:08.389247 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02 13:29:08.389260 | orchestrator | Monday 02 June 2025 13:28:02 +0000 (0:00:01.040) 0:00:08.432 *********** 2025-06-02 13:29:08.389289 | orchestrator | changed: [testbed-manager] 2025-06-02 13:29:08.389302 | orchestrator | 2025-06-02 13:29:08.389314 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 13:29:08.389325 | orchestrator | Monday 02 June 2025 13:28:42 +0000 (0:00:40.563) 0:00:48.996 *********** 2025-06-02 13:29:08.389336 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:29:08.389346 | orchestrator | 2025-06-02 13:29:08.389357 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 13:29:08.389368 | orchestrator | 2025-06-02 13:29:08.389378 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2025-06-02 13:29:08.389389 | orchestrator | Monday 02 June 2025 13:28:43 +0000 (0:00:00.123) 0:00:49.120 *********** 2025-06-02 13:29:08.389400 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:29:08.389410 | orchestrator | 2025-06-02 13:29:08.389421 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 13:29:08.389432 | orchestrator | 2025-06-02 13:29:08.389442 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 13:29:08.389453 | orchestrator | Monday 02 June 2025 13:28:44 +0000 (0:00:01.352) 0:00:50.472 *********** 2025-06-02 13:29:08.389464 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:29:08.389474 | orchestrator | 2025-06-02 13:29:08.389485 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 13:29:08.389496 | orchestrator | 2025-06-02 13:29:08.389506 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 13:29:08.389517 | orchestrator | Monday 02 June 2025 13:28:55 +0000 (0:00:11.085) 0:01:01.558 *********** 2025-06-02 13:29:08.389528 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:29:08.389539 | orchestrator | 2025-06-02 13:29:08.389556 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:29:08.389568 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 13:29:08.389579 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.389590 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:29:08.389601 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 
13:29:08.389612 | orchestrator | 2025-06-02 13:29:08.389623 | orchestrator | 2025-06-02 13:29:08.389633 | orchestrator | 2025-06-02 13:29:08.389644 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:29:08.389655 | orchestrator | Monday 02 June 2025 13:29:06 +0000 (0:00:11.079) 0:01:12.638 *********** 2025-06-02 13:29:08.389666 | orchestrator | =============================================================================== 2025-06-02 13:29:08.389676 | orchestrator | Create admin user ------------------------------------------------------ 40.56s 2025-06-02 13:29:08.389687 | orchestrator | Restart ceph manager service ------------------------------------------- 23.52s 2025-06-02 13:29:08.389698 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.43s 2025-06-02 13:29:08.389713 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.08s 2025-06-02 13:29:08.389730 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.05s 2025-06-02 13:29:08.389741 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s 2025-06-02 13:29:08.389751 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.00s 2025-06-02 13:29:08.389762 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.92s 2025-06-02 13:29:08.389773 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.87s 2025-06-02 13:29:08.389784 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.79s 2025-06-02 13:29:08.389794 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s 2025-06-02 13:29:08.389805 | orchestrator | 2025-06-02 13:29:08 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 
13:29:08.390186 | orchestrator | 2025-06-02 13:29:08 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:08.390208 | orchestrator | 2025-06-02 13:29:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:11.425325 | orchestrator | 2025-06-02 13:29:11 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:11.425553 | orchestrator | 2025-06-02 13:29:11 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:11.426137 | orchestrator | 2025-06-02 13:29:11 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:11.426591 | orchestrator | 2025-06-02 13:29:11 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:11.426614 | orchestrator | 2025-06-02 13:29:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:14.458265 | orchestrator | 2025-06-02 13:29:14 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:14.458402 | orchestrator | 2025-06-02 13:29:14 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:14.458775 | orchestrator | 2025-06-02 13:29:14 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:14.459419 | orchestrator | 2025-06-02 13:29:14 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:14.459456 | orchestrator | 2025-06-02 13:29:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:17.484848 | orchestrator | 2025-06-02 13:29:17 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:17.485221 | orchestrator | 2025-06-02 13:29:17 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:17.485844 | orchestrator | 2025-06-02 13:29:17 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:17.486587 | orchestrator 
| 2025-06-02 13:29:17 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:17.486614 | orchestrator | 2025-06-02 13:29:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:20.514957 | orchestrator | 2025-06-02 13:29:20 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:20.515051 | orchestrator | 2025-06-02 13:29:20 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:20.515066 | orchestrator | 2025-06-02 13:29:20 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:20.515078 | orchestrator | 2025-06-02 13:29:20 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:20.515089 | orchestrator | 2025-06-02 13:29:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:23.535541 | orchestrator | 2025-06-02 13:29:23 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:23.536055 | orchestrator | 2025-06-02 13:29:23 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:23.537097 | orchestrator | 2025-06-02 13:29:23 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:23.538904 | orchestrator | 2025-06-02 13:29:23 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:23.538928 | orchestrator | 2025-06-02 13:29:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:26.560954 | orchestrator | 2025-06-02 13:29:26 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:26.562765 | orchestrator | 2025-06-02 13:29:26 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:26.563646 | orchestrator | 2025-06-02 13:29:26 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:26.564372 | orchestrator | 2025-06-02 13:29:26 | INFO  | 
Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:26.564474 | orchestrator | 2025-06-02 13:29:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:29.603165 | orchestrator | 2025-06-02 13:29:29 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:29.604949 | orchestrator | 2025-06-02 13:29:29 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:29.605879 | orchestrator | 2025-06-02 13:29:29 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:29.607356 | orchestrator | 2025-06-02 13:29:29 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:29.607498 | orchestrator | 2025-06-02 13:29:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:32.638998 | orchestrator | 2025-06-02 13:29:32 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:32.639673 | orchestrator | 2025-06-02 13:29:32 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:32.640529 | orchestrator | 2025-06-02 13:29:32 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:32.643641 | orchestrator | 2025-06-02 13:29:32 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:32.643674 | orchestrator | 2025-06-02 13:29:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:35.672875 | orchestrator | 2025-06-02 13:29:35 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:35.675332 | orchestrator | 2025-06-02 13:29:35 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:35.677919 | orchestrator | 2025-06-02 13:29:35 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:35.680208 | orchestrator | 2025-06-02 13:29:35 | INFO  | Task 
22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:35.681014 | orchestrator | 2025-06-02 13:29:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:38.723610 | orchestrator | 2025-06-02 13:29:38 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:38.724870 | orchestrator | 2025-06-02 13:29:38 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:38.725860 | orchestrator | 2025-06-02 13:29:38 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:38.726803 | orchestrator | 2025-06-02 13:29:38 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:38.727461 | orchestrator | 2025-06-02 13:29:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:41.779418 | orchestrator | 2025-06-02 13:29:41 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:41.779661 | orchestrator | 2025-06-02 13:29:41 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:41.780642 | orchestrator | 2025-06-02 13:29:41 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:41.781502 | orchestrator | 2025-06-02 13:29:41 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:41.782569 | orchestrator | 2025-06-02 13:29:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:44.826736 | orchestrator | 2025-06-02 13:29:44 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:44.827009 | orchestrator | 2025-06-02 13:29:44 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:44.827914 | orchestrator | 2025-06-02 13:29:44 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:44.828796 | orchestrator | 2025-06-02 13:29:44 | INFO  | Task 
22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:44.828821 | orchestrator | 2025-06-02 13:29:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:47.856629 | orchestrator | 2025-06-02 13:29:47 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:47.858725 | orchestrator | 2025-06-02 13:29:47 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:47.859637 | orchestrator | 2025-06-02 13:29:47 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:47.861383 | orchestrator | 2025-06-02 13:29:47 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:47.861516 | orchestrator | 2025-06-02 13:29:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:50.898237 | orchestrator | 2025-06-02 13:29:50 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:50.898444 | orchestrator | 2025-06-02 13:29:50 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:50.899706 | orchestrator | 2025-06-02 13:29:50 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:50.901707 | orchestrator | 2025-06-02 13:29:50 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:50.902559 | orchestrator | 2025-06-02 13:29:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:53.954857 | orchestrator | 2025-06-02 13:29:53 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:53.961847 | orchestrator | 2025-06-02 13:29:53 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:53.970077 | orchestrator | 2025-06-02 13:29:53 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:53.973490 | orchestrator | 2025-06-02 13:29:53 | INFO  | Task 
22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:53.974005 | orchestrator | 2025-06-02 13:29:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:29:57.026742 | orchestrator | 2025-06-02 13:29:57 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:29:57.029531 | orchestrator | 2025-06-02 13:29:57 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:29:57.034249 | orchestrator | 2025-06-02 13:29:57 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:29:57.034286 | orchestrator | 2025-06-02 13:29:57 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:29:57.034331 | orchestrator | 2025-06-02 13:29:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:30:00.067080 | orchestrator | 2025-06-02 13:30:00 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:30:00.069393 | orchestrator | 2025-06-02 13:30:00 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:30:00.071276 | orchestrator | 2025-06-02 13:30:00 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:30:00.073197 | orchestrator | 2025-06-02 13:30:00 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:30:00.073227 | orchestrator | 2025-06-02 13:30:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:30:03.115585 | orchestrator | 2025-06-02 13:30:03 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:30:03.116846 | orchestrator | 2025-06-02 13:30:03 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:30:03.119764 | orchestrator | 2025-06-02 13:30:03 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:30:03.122283 | orchestrator | 2025-06-02 13:30:03 | INFO  | Task 
22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:30:03.122858 | orchestrator | 2025-06-02 13:30:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:30:06.160516 | orchestrator | 2025-06-02 13:30:06 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:30:06.161202 | orchestrator | 2025-06-02 13:30:06 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:30:06.164147 | orchestrator | 2025-06-02 13:30:06 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:30:06.164201 | orchestrator | 2025-06-02 13:30:06 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:30:06.164223 | orchestrator | 2025-06-02 13:30:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:30:48.852441 | orchestrator | 2025-06-02 13:30:48 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:30:48.853114 | orchestrator | 2025-06-02 13:30:48 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state STARTED 2025-06-02 13:30:48.854705 | orchestrator | 2025-06-02 13:30:48 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:30:48.855940 | orchestrator | 2025-06-02 13:30:48 | INFO  | Task 
22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:30:48.855968 | orchestrator | 2025-06-02 13:30:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:30:51.917041 | orchestrator | 2025-06-02 13:30:51 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:30:51.919633 | orchestrator | 2025-06-02 13:30:51 | INFO  | Task 9ea8313d-ead4-4bce-b7e5-d93f45fff597 is in state SUCCESS 2025-06-02 13:30:51.921080 | orchestrator | 2025-06-02 13:30:51 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED 2025-06-02 13:30:51.923018 | orchestrator | 2025-06-02 13:30:51.923057 | orchestrator | 2025-06-02 13:30:51.923070 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:30:51.923082 | orchestrator | 2025-06-02 13:30:51.923108 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:30:51.923120 | orchestrator | Monday 02 June 2025 13:28:00 +0000 (0:00:00.210) 0:00:00.210 *********** 2025-06-02 13:30:51.923132 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:30:51.923145 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:30:51.923156 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:30:51.923167 | orchestrator | 2025-06-02 13:30:51.923178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:30:51.923189 | orchestrator | Monday 02 June 2025 13:28:01 +0000 (0:00:00.222) 0:00:00.433 *********** 2025-06-02 13:30:51.923200 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-02 13:30:51.923211 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-02 13:30:51.923221 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-02 13:30:51.923232 | orchestrator | 2025-06-02 13:30:51.923243 | orchestrator | PLAY [Apply role glance] 
******************************************************* 2025-06-02 13:30:51.923253 | orchestrator | 2025-06-02 13:30:51.923304 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 13:30:51.923316 | orchestrator | Monday 02 June 2025 13:28:01 +0000 (0:00:00.386) 0:00:00.819 *********** 2025-06-02 13:30:51.923359 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:30:51.923376 | orchestrator | 2025-06-02 13:30:51.923387 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-02 13:30:51.923398 | orchestrator | Monday 02 June 2025 13:28:02 +0000 (0:00:01.212) 0:00:02.032 *********** 2025-06-02 13:30:51.923409 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-02 13:30:51.923419 | orchestrator | 2025-06-02 13:30:51.923430 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-02 13:30:51.923441 | orchestrator | Monday 02 June 2025 13:28:06 +0000 (0:00:03.497) 0:00:05.530 *********** 2025-06-02 13:30:51.923452 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-02 13:30:51.923463 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-02 13:30:51.923474 | orchestrator | 2025-06-02 13:30:51.923484 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-02 13:30:51.923495 | orchestrator | Monday 02 June 2025 13:28:11 +0000 (0:00:05.329) 0:00:10.859 *********** 2025-06-02 13:30:51.923506 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-02 13:30:51.923517 | orchestrator | 2025-06-02 13:30:51.923527 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-02 13:30:51.923538 
| orchestrator | Monday 02 June 2025 13:28:14 +0000 (0:00:02.932) 0:00:13.792 *********** 2025-06-02 13:30:51.923549 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:30:51.923585 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-02 13:30:51.923597 | orchestrator | 2025-06-02 13:30:51.923610 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-02 13:30:51.923622 | orchestrator | Monday 02 June 2025 13:28:18 +0000 (0:00:03.624) 0:00:17.416 *********** 2025-06-02 13:30:51.923635 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:30:51.923646 | orchestrator | 2025-06-02 13:30:51.923659 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-02 13:30:51.923671 | orchestrator | Monday 02 June 2025 13:28:21 +0000 (0:00:03.183) 0:00:20.600 *********** 2025-06-02 13:30:51.923686 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 13:30:51.923705 | orchestrator | 2025-06-02 13:30:51.923723 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-02 13:30:51.923740 | orchestrator | Monday 02 June 2025 13:28:25 +0000 (0:00:03.705) 0:00:24.305 *********** 2025-06-02 13:30:51.923794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.923823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.923857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.923871 | orchestrator | 2025-06-02 13:30:51.923882 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 13:30:51.923893 | orchestrator | Monday 02 June 2025 13:28:29 +0000 (0:00:04.853) 0:00:29.159 *********** 2025-06-02 13:30:51.923911 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:30:51.923923 | orchestrator | 2025-06-02 13:30:51.923934 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-02 13:30:51.923950 | orchestrator | Monday 02 June 2025 13:28:30 +0000 (0:00:00.546) 0:00:29.705 *********** 2025-06-02 13:30:51.923961 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:30:51.923972 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:30:51.923983 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:30:51.923993 | orchestrator | 2025-06-02 13:30:51.924004 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-02 13:30:51.924015 | orchestrator | Monday 02 June 2025 13:28:33 +0000 (0:00:03.501) 0:00:33.207 *********** 2025-06-02 13:30:51.924026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2025-06-02 13:30:51.924037 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 13:30:51.924047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 13:30:51.924058 | orchestrator | 2025-06-02 13:30:51.924069 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-02 13:30:51.924080 | orchestrator | Monday 02 June 2025 13:28:35 +0000 (0:00:01.369) 0:00:34.577 *********** 2025-06-02 13:30:51.924090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 13:30:51.924108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 13:30:51.924119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 13:30:51.924130 | orchestrator | 2025-06-02 13:30:51.924140 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-02 13:30:51.924151 | orchestrator | Monday 02 June 2025 13:28:36 +0000 (0:00:01.000) 0:00:35.577 *********** 2025-06-02 13:30:51.924162 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:30:51.924172 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:30:51.924183 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:30:51.924194 | orchestrator | 2025-06-02 13:30:51.924204 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-02 13:30:51.924215 | orchestrator | Monday 02 June 2025 13:28:36 +0000 (0:00:00.673) 0:00:36.250 *********** 2025-06-02 13:30:51.924226 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.924237 | orchestrator | 2025-06-02 13:30:51.924247 | orchestrator | TASK [glance : Set glance policy file] 
***************************************** 2025-06-02 13:30:51.924258 | orchestrator | Monday 02 June 2025 13:28:37 +0000 (0:00:00.124) 0:00:36.375 *********** 2025-06-02 13:30:51.924269 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.924280 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:30:51.924290 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:30:51.924301 | orchestrator | 2025-06-02 13:30:51.924312 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 13:30:51.924348 | orchestrator | Monday 02 June 2025 13:28:37 +0000 (0:00:00.250) 0:00:36.626 *********** 2025-06-02 13:30:51.924361 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:30:51.924372 | orchestrator | 2025-06-02 13:30:51.924383 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-02 13:30:51.924393 | orchestrator | Monday 02 June 2025 13:28:37 +0000 (0:00:00.462) 0:00:37.088 *********** 2025-06-02 13:30:51.924412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.924431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.924451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.924463 | orchestrator | 2025-06-02 13:30:51.924474 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 13:30:51.924485 | orchestrator | Monday 02 June 2025 13:28:43 +0000 (0:00:05.747) 0:00:42.836 *********** 2025-06-02 13:30:51.924510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924530 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:30:51.924542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924554 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.924579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924598 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:30:51.924610 | orchestrator | 2025-06-02 13:30:51.924620 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 13:30:51.924631 | orchestrator | Monday 02 June 2025 13:28:47 +0000 (0:00:03.972) 0:00:46.808 *********** 2025-06-02 13:30:51.924642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924654 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:30:51.924677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924696 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.924707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 13:30:51.924719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:30:51.924730 | 
orchestrator | 2025-06-02 13:30:51.924740 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 13:30:51.924751 | orchestrator | Monday 02 June 2025 13:28:51 +0000 (0:00:03.563) 0:00:50.371 *********** 2025-06-02 13:30:51.924762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:30:51.924772 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.924783 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:30:51.924794 | orchestrator | 2025-06-02 13:30:51.924804 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 13:30:51.924815 | orchestrator | Monday 02 June 2025 13:28:54 +0000 (0:00:03.184) 0:00:53.556 *********** 2025-06-02 13:30:51.924843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.924864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.924876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 13:30:51.924895 | orchestrator |
2025-06-02 13:30:51.924906 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-06-02 13:30:51.924917 | orchestrator | Monday 02 June 2025 13:28:58 +0000 (0:00:03.905) 0:00:57.461 ***********
2025-06-02 13:30:51.924927 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.924938 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:30:51.924949 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:30:51.924961 | orchestrator |
2025-06-02 13:30:51.924979 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-06-02 13:30:51.925205 | orchestrator | Monday 02 June 2025 13:29:06 +0000 (0:00:07.846) 0:01:05.308 ***********
2025-06-02 13:30:51.925232 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925251 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925275 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925294 | orchestrator |
2025-06-02 13:30:51.925311 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-06-02 13:30:51.925375 | orchestrator | Monday 02 June 2025 13:29:12 +0000 (0:00:06.186) 0:01:11.495 ***********
2025-06-02 13:30:51.925396 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925413 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925430 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925447 | orchestrator |
2025-06-02 13:30:51.925464 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-06-02 13:30:51.925482 | orchestrator | Monday 02 June 2025 13:29:18 +0000 (0:00:06.349) 0:01:17.845 ***********
2025-06-02 13:30:51.925500 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925518 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925535 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925552 | orchestrator |
2025-06-02 13:30:51.925570 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-06-02 13:30:51.925587 | orchestrator | Monday 02 June 2025 13:29:23 +0000 (0:00:04.615) 0:01:22.460 ***********
2025-06-02 13:30:51.925605 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925622 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925640 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925657 | orchestrator |
2025-06-02 13:30:51.925705 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-06-02 13:30:51.925723 | orchestrator | Monday 02 June 2025 13:29:27 +0000 (0:00:04.173) 0:01:26.634 ***********
2025-06-02 13:30:51.925740 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925758 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925775 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925792 | orchestrator |
2025-06-02 13:30:51.925811 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-06-02 13:30:51.925829 | orchestrator | Monday 02 June 2025 13:29:27 +0000 (0:00:00.328) 0:01:26.962 ***********
2025-06-02 13:30:51.925847 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 13:30:51.925868 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:30:51.925887 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 13:30:51.925906 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:30:51.925925 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 13:30:51.925944 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.925963 | orchestrator |
2025-06-02 13:30:51.925980 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 13:30:51.925999 | orchestrator | Monday 02 June 2025 13:29:30 +0000 (0:00:03.205) 0:01:30.168 *********** 2025-06-02 13:30:51.926086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-06-02 13:30:51.926167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.926189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 13:30:51.926223 | orchestrator | 2025-06-02 13:30:51.926244 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 13:30:51.926264 | orchestrator | Monday 02 June 2025 13:29:34 +0000 (0:00:03.504) 0:01:33.672 *********** 2025-06-02 13:30:51.926285 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:30:51.926302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:30:51.926321 | 
orchestrator | skipping: [testbed-node-2]
2025-06-02 13:30:51.926408 | orchestrator |
2025-06-02 13:30:51.926426 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-06-02 13:30:51.926445 | orchestrator | Monday 02 June 2025 13:29:34 +0000 (0:00:00.242) 0:01:33.915 ***********
2025-06-02 13:30:51.926462 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926481 | orchestrator |
2025-06-02 13:30:51.926500 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-06-02 13:30:51.926519 | orchestrator | Monday 02 June 2025 13:29:36 +0000 (0:00:01.937) 0:01:35.852 ***********
2025-06-02 13:30:51.926538 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926552 | orchestrator |
2025-06-02 13:30:51.926563 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-06-02 13:30:51.926574 | orchestrator | Monday 02 June 2025 13:29:38 +0000 (0:00:01.862) 0:01:37.769 ***********
2025-06-02 13:30:51.926584 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926598 | orchestrator |
2025-06-02 13:30:51.926617 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-06-02 13:30:51.926648 | orchestrator | Monday 02 June 2025 13:29:40 +0000 (0:00:01.862) 0:01:39.631 ***********
2025-06-02 13:30:51.926666 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926683 | orchestrator |
2025-06-02 13:30:51.926710 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-06-02 13:30:51.926728 | orchestrator | Monday 02 June 2025 13:30:10 +0000 (0:00:30.479) 0:02:10.111 ***********
2025-06-02 13:30:51.926745 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926763 | orchestrator |
2025-06-02 13:30:51.926781 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 13:30:51.926797 | orchestrator | Monday 02 June 2025 13:30:13 +0000 (0:00:02.831) 0:02:12.943 ***********
2025-06-02 13:30:51.926812 | orchestrator |
2025-06-02 13:30:51.926827 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 13:30:51.926843 | orchestrator | Monday 02 June 2025 13:30:13 +0000 (0:00:00.152) 0:02:13.095 ***********
2025-06-02 13:30:51.926860 | orchestrator |
2025-06-02 13:30:51.926876 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 13:30:51.926893 | orchestrator | Monday 02 June 2025 13:30:13 +0000 (0:00:00.146) 0:02:13.241 ***********
2025-06-02 13:30:51.926909 | orchestrator |
2025-06-02 13:30:51.926925 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-06-02 13:30:51.926941 | orchestrator | Monday 02 June 2025 13:30:14 +0000 (0:00:00.228) 0:02:13.470 ***********
2025-06-02 13:30:51.926969 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:30:51.926985 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:30:51.927001 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:30:51.927017 | orchestrator |
2025-06-02 13:30:51.927032 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:30:51.927049 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 13:30:51.927067 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 13:30:51.927083 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 13:30:51.927099 | orchestrator |
2025-06-02 13:30:51.927116 | orchestrator |
2025-06-02 13:30:51.927132 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:30:51.927148 | orchestrator | Monday 02 June 2025 13:30:50 +0000 (0:00:36.571) 0:02:50.041 ***********
2025-06-02 13:30:51.927164 | orchestrator | ===============================================================================
2025-06-02 13:30:51.927180 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.57s
2025-06-02 13:30:51.927197 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.48s
2025-06-02 13:30:51.927212 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.85s
2025-06-02 13:30:51.927227 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.35s
2025-06-02 13:30:51.927243 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.19s
2025-06-02 13:30:51.927259 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.75s
2025-06-02 13:30:51.927274 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.33s
2025-06-02 13:30:51.927289 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.85s
2025-06-02 13:30:51.927305 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.62s
2025-06-02 13:30:51.927321 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.17s
2025-06-02 13:30:51.927411 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.97s
2025-06-02 13:30:51.927429 | orchestrator | glance : Copying over config.json files for services -------------------- 3.91s
2025-06-02 13:30:51.927445 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.71s
2025-06-02 13:30:51.927461 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.62s
2025-06-02 13:30:51.927476 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.56s
2025-06-02 13:30:51.927493 | orchestrator | glance : Check glance containers ---------------------------------------- 3.50s
2025-06-02 13:30:51.927509 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.50s
2025-06-02 13:30:51.927525 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.50s
2025-06-02 13:30:51.927541 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.21s
2025-06-02 13:30:51.927558 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.18s
2025-06-02 13:30:51.927574 | orchestrator | 2025-06-02 13:30:51 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:30:51.927591 | orchestrator | 2025-06-02 13:30:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:30:54.979807 | orchestrator | 2025-06-02 13:30:54 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:30:54.981182 | orchestrator | 2025-06-02 13:30:54 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED
2025-06-02 13:30:54.982446 | orchestrator | 2025-06-02 13:30:54 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:30:54.985598 | orchestrator | 2025-06-02 13:30:54 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:30:54.985849 | orchestrator | 2025-06-02 13:30:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:30:58.042420 | orchestrator | 2025-06-02 13:30:58 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:30:58.043889 | orchestrator | 2025-06-02 13:30:58 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED
2025-06-02 13:30:58.049498 | orchestrator | 2025-06-02 13:30:58 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:30:58.049699 | orchestrator | 2025-06-02 13:30:58 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:30:58.049807 | orchestrator | 2025-06-02 13:30:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:31:01.092225 | orchestrator | 2025-06-02 13:31:01 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:31:01.092387 | orchestrator | 2025-06-02 13:31:01 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED
2025-06-02 13:31:01.093059 | orchestrator | 2025-06-02 13:31:01 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:31:01.097188 | orchestrator | 2025-06-02 13:31:01 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:31:01.098142 | orchestrator | 2025-06-02 13:31:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:31:04.151161 | orchestrator | 2025-06-02 13:31:04 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:31:04.153256 | orchestrator | 2025-06-02 13:31:04 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED
2025-06-02 13:31:04.154686 | orchestrator | 2025-06-02 13:31:04 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:31:04.156855 | orchestrator | 2025-06-02 13:31:04 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:31:04.156946 | orchestrator | 2025-06-02 13:31:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:31:07.208799 | orchestrator | 2025-06-02 13:31:07 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:31:07.210788 | orchestrator | 2025-06-02 13:31:07 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state STARTED
2025-06-02 13:31:07.212876 | orchestrator | 2025-06-02 13:31:07 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:31:07.214610 | orchestrator | 2025-06-02 13:31:07 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED
2025-06-02 13:31:07.214642 | orchestrator | 2025-06-02 13:31:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:31:10.268624 | orchestrator | 2025-06-02 13:31:10 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:31:10.271308 | orchestrator | 2025-06-02 13:31:10 | INFO  | Task 90e5fda2-f7ea-4476-90cc-3d0da2e2e422 is in state SUCCESS
2025-06-02 13:31:10.272064 | orchestrator |
2025-06-02 13:31:10.273951 | orchestrator |
2025-06-02 13:31:10.273987 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:31:10.274000 | orchestrator |
2025-06-02 13:31:10.274011 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:31:10.274077 | orchestrator | Monday 02 June 2025 13:27:54 +0000 (0:00:00.274) 0:00:00.274 ***********
2025-06-02 13:31:10.274089 | orchestrator | ok: [testbed-manager]
2025-06-02 13:31:10.274101 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:31:10.274139 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:31:10.274150 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:31:10.274161 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:31:10.274171 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:31:10.274181 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:31:10.274192 | orchestrator |
2025-06-02 13:31:10.274203 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:31:10.274214 | orchestrator | Monday 02 June 2025 13:27:54 +0000 (0:00:00.814) 0:00:01.088 ***********
2025-06-02 13:31:10.274387 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-02 13:31:10.274403 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-02 13:31:10.274414 | orchestrator | ok:
[testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 13:31:10.274425 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 13:31:10.274436 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 13:31:10.274446 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 13:31:10.274457 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 13:31:10.274468 | orchestrator | 2025-06-02 13:31:10.274479 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 13:31:10.274490 | orchestrator | 2025-06-02 13:31:10.274501 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 13:31:10.274512 | orchestrator | Monday 02 June 2025 13:27:55 +0000 (0:00:00.773) 0:00:01.862 *********** 2025-06-02 13:31:10.274524 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:31:10.274536 | orchestrator | 2025-06-02 13:31:10.274563 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 13:31:10.274576 | orchestrator | Monday 02 June 2025 13:27:56 +0000 (0:00:01.261) 0:00:03.124 *********** 2025-06-02 13:31:10.274594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.274612 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:31:10.274627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.274641 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.274982 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.275012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275032 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.275057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.275069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.275228 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:31:10.275242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275308 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275792 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.275909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.275945 | orchestrator | 2025-06-02 13:31:10.275962 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 13:31:10.275974 | orchestrator | Monday 02 June 2025 13:27:59 +0000 (0:00:03.001) 0:00:06.126 *********** 2025-06-02 13:31:10.275985 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:31:10.276030 | orchestrator | 2025-06-02 13:31:10.276041 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 13:31:10.276052 | orchestrator | Monday 02 June 2025 13:28:01 +0000 (0:00:01.310) 0:00:07.437 *********** 2025-06-02 13:31:10.276064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:31:10.276084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.276096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.276136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.276149 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.276161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.277460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.277501 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.277514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.277868 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:31:10.277940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277973 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.277994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.278006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.278048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.278100 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.278114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.278126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.278147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.278159 | orchestrator | 2025-06-02 13:31:10.278171 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 13:31:10.278216 | orchestrator | Monday 02 June 2025 13:28:07 +0000 (0:00:05.740) 0:00:13.177 *********** 2025-06-02 13:31:10.278228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 13:31:10.278240 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278252 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278299 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 13:31:10.278314 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278325 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.278381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278446 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.278493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278564 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.278575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.278683 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.278701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-06-02 13:31:10.278715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278741 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.278754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278826 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.278839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.278892 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.278904 | orchestrator | 2025-06-02 13:31:10.278916 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 13:31:10.278929 | orchestrator | Monday 02 June 2025 13:28:08 +0000 (0:00:01.536) 0:00:14.713 *********** 2025-06-02 13:31:10.278943 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 
13:31:10.278955 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.278967 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279010 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 13:31:10.279030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279137 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.279148 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.279159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279216 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 13:31:10.279354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279367 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279378 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.279389 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.279400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279412 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.279423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279499 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.279510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 13:31:10.279526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 13:31:10.279548 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.279559 | orchestrator | 2025-06-02 13:31:10.279569 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 13:31:10.279580 | orchestrator | Monday 02 June 2025 13:28:10 +0000 (0:00:01.921) 0:00:16.635 *********** 2025-06-02 13:31:10.279591 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:31:10.279603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279725 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.279743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279786 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279943 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:31:10.279958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.279969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.279998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.280009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.280027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.280069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.280082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.280093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.280110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.280121 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.280133 | orchestrator | 2025-06-02 13:31:10.280144 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 13:31:10.280155 | orchestrator | Monday 02 June 2025 13:28:15 +0000 (0:00:05.027) 0:00:21.663 *********** 2025-06-02 13:31:10.280165 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:31:10.280176 | orchestrator | 2025-06-02 13:31:10.280187 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-02 13:31:10.280198 | orchestrator | Monday 02 June 2025 13:28:16 +0000 (0:00:00.742) 0:00:22.405 *********** 2025-06-02 13:31:10.280216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280227 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280269 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 13:31:10.280294 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 
1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280321 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280400 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280414 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061430, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9828968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280458 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280472 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280483 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280499 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280511 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280532 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280586 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280600 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 
'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280611 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280622 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280641 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1061410, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 13:31:10.280761 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280787 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280838 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280852 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280862 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280876 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280893 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280903 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280913 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280949 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 
1748724126.0, 'ctime': 1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280961 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280971 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.280985 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281001 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281011 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281021 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1061360, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 13:31:10.281057 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281068 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281093 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281109 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281118 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281128 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1061367, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 
1748868669.9708965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281163 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281175 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281205 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1061362, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 13:31:10.281215 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281225 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061459, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9858968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281235 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281245 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1061403, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281281 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281293 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281313 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061419, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281323 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281359 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1061415, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 
1748868669.9808967, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281376 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061459, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9858968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281395 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281449 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061364, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9688966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281469 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1061408, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9798968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 13:31:10.281504 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281522 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061459, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9858968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281539 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1061376, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9788966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281555 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061459, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9858968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281566 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1061428, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281608 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1061459, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9858968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281621 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061419, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281645 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1061419, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9818966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 13:31:10.281656 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1061359, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 
1748724126.0, 'ctime': 1748868669.9688966, ... 'isgid': False})
2025-06-02 13:31:10.282553 | orchestrator | [verbose loop output condensed: the task iterated over the Prometheus alerting and recording rule files under /operations/prometheus/ (alertmanager.rec.rules, ceph.rec.rules, ceph.rules, elasticsearch.rules, fluentd-aggregator.rules, haproxy.rules, mysql.rules, node.rules, openstack.rules, prometheus-extra.rules, prometheus.rules, rabbitmq.rules, redfish.rules); every item was reported "changed" on testbed-manager and "skipping" on testbed-node-0 through testbed-node-5]
2025-06-02 13:31:10.282563 | orchestrator |
2025-06-02 13:31:10.282563 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 13:31:10.282573 | orchestrator | Monday 02 June 2025 13:28:36 +0000 (0:00:20.710) 0:00:43.116 ***********
2025-06-02 13:31:10.282583 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 13:31:10.282592 | orchestrator |
2025-06-02 13:31:10.282602 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 13:31:10.282621 | orchestrator | Monday 02 June 2025 13:28:37 +0000 (0:00:00.606) 0:00:43.723 ***********
2025-06-02 13:31:10.282631 | orchestrator | [WARNING]: Skipped
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282680 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 13:31:10.282689 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282737 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:31:10.282747 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282794 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282842 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282896 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282936 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-06-02 13:31:10.282975 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 13:31:10.282983 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 13:31:10.282990 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 13:31:10.282998 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 13:31:10.283006 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 13:31:10.283014 | orchestrator |
2025-06-02 13:31:10.283022 | orchestrator | TASK [prometheus : Copying over prometheus config file]
************************ 2025-06-02 13:31:10.283030 | orchestrator | Monday 02 June 2025 13:28:40 +0000 (0:00:02.571) 0:00:46.294 *********** 2025-06-02 13:31:10.283043 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283051 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283062 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283071 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283078 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283086 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283094 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283102 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283110 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283117 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283125 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 13:31:10.283133 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283141 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-02 13:31:10.283149 | orchestrator | 2025-06-02 13:31:10.283156 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-02 13:31:10.283164 | orchestrator | Monday 02 June 2025 13:28:58 +0000 (0:00:18.035) 0:01:04.329 *********** 2025-06-02 13:31:10.283172 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283180 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 13:31:10.283187 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283195 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283203 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283211 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283219 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283226 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283234 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283242 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283250 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 13:31:10.283257 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283265 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-02 13:31:10.283273 | orchestrator | 2025-06-02 13:31:10.283281 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-02 13:31:10.283289 | orchestrator | Monday 02 June 2025 13:29:01 +0000 (0:00:03.765) 0:01:08.094 *********** 2025-06-02 13:31:10.283297 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283305 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283313 | orchestrator | skipping: [testbed-node-3] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283321 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283329 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283350 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283362 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283376 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283384 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283392 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283400 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-02 13:31:10.283408 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 13:31:10.283415 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283423 | orchestrator | 2025-06-02 13:31:10.283431 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-02 13:31:10.283439 | orchestrator | Monday 02 June 2025 13:29:04 +0000 (0:00:02.156) 0:01:10.251 *********** 2025-06-02 13:31:10.283447 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:31:10.283455 | orchestrator | 2025-06-02 13:31:10.283462 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-02 13:31:10.283470 | orchestrator | Monday 02 June 2025 13:29:04 +0000 (0:00:00.853) 0:01:11.105 *********** 2025-06-02 13:31:10.283478 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.283486 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 13:31:10.283494 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283502 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283509 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283517 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283525 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283533 | orchestrator | 2025-06-02 13:31:10.283541 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-02 13:31:10.283548 | orchestrator | Monday 02 June 2025 13:29:05 +0000 (0:00:00.591) 0:01:11.696 *********** 2025-06-02 13:31:10.283556 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.283567 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283575 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:31:10.283583 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283591 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283599 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.283606 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.283614 | orchestrator | 2025-06-02 13:31:10.283622 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-02 13:31:10.283630 | orchestrator | Monday 02 June 2025 13:29:08 +0000 (0:00:02.890) 0:01:14.587 *********** 2025-06-02 13:31:10.283638 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283646 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.283654 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283661 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283669 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 
13:31:10.283677 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283685 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283693 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283701 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283708 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283716 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283724 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283732 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 13:31:10.283740 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283752 | orchestrator | 2025-06-02 13:31:10.283760 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-02 13:31:10.283768 | orchestrator | Monday 02 June 2025 13:29:11 +0000 (0:00:02.936) 0:01:17.524 *********** 2025-06-02 13:31:10.283776 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 13:31:10.283784 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.283792 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 13:31:10.283800 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283807 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 13:31:10.283815 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283823 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 
13:31:10.283831 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.283839 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-02 13:31:10.283847 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 13:31:10.283854 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.283862 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 13:31:10.283875 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.283883 | orchestrator | 2025-06-02 13:31:10.283891 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-02 13:31:10.283898 | orchestrator | Monday 02 June 2025 13:29:14 +0000 (0:00:03.097) 0:01:20.621 *********** 2025-06-02 13:31:10.283906 | orchestrator | [WARNING]: Skipped 2025-06-02 13:31:10.283914 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-02 13:31:10.283922 | orchestrator | due to this access issue: 2025-06-02 13:31:10.283930 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-02 13:31:10.283937 | orchestrator | not a directory 2025-06-02 13:31:10.283945 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 13:31:10.283953 | orchestrator | 2025-06-02 13:31:10.283961 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-02 13:31:10.283968 | orchestrator | Monday 02 June 2025 13:29:16 +0000 (0:00:01.603) 0:01:22.224 *********** 2025-06-02 13:31:10.283976 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.283984 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.283992 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.283999 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.284007 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.284015 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.284023 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.284030 | orchestrator | 2025-06-02 13:31:10.284038 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-02 13:31:10.284046 | orchestrator | Monday 02 June 2025 13:29:17 +0000 (0:00:01.585) 0:01:23.810 *********** 2025-06-02 13:31:10.284054 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.284061 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:10.284069 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:10.284077 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:10.284085 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:10.284092 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:10.284100 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:10.284108 | orchestrator | 2025-06-02 13:31:10.284116 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 13:31:10.284124 | orchestrator | Monday 02 June 2025 13:29:18 +0000 (0:00:00.945) 0:01:24.756 *********** 2025-06-02 13:31:10.284141 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 13:31:10.284150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284158 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284213 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284222 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 13:31:10.284259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284281 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 13:31:10.284296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284321 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284348 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284377 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 13:31:10.284432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 13:31:10.284454 | orchestrator | 2025-06-02 13:31:10.284462 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 13:31:10.284470 | orchestrator | Monday 02 June 2025 13:29:22 +0000 (0:00:04.278) 0:01:29.034 *********** 2025-06-02 13:31:10.284478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 13:31:10.284490 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:31:10.284498 | orchestrator | 2025-06-02 13:31:10.284506 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284514 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:01.280) 0:01:30.315 *********** 2025-06-02 13:31:10.284521 | orchestrator | 2025-06-02 13:31:10.284529 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284537 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.069) 0:01:30.384 *********** 2025-06-02 13:31:10.284545 | orchestrator | 2025-06-02 13:31:10.284553 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284561 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.060) 0:01:30.444 *********** 2025-06-02 13:31:10.284569 | orchestrator | 2025-06-02 13:31:10.284576 | 
orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284584 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.059) 0:01:30.504 *********** 2025-06-02 13:31:10.284592 | orchestrator | 2025-06-02 13:31:10.284600 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284608 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.060) 0:01:30.564 *********** 2025-06-02 13:31:10.284616 | orchestrator | 2025-06-02 13:31:10.284623 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284631 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.183) 0:01:30.747 *********** 2025-06-02 13:31:10.284639 | orchestrator | 2025-06-02 13:31:10.284647 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 13:31:10.284654 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.066) 0:01:30.814 *********** 2025-06-02 13:31:10.284662 | orchestrator | 2025-06-02 13:31:10.284670 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 13:31:10.284678 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:00.072) 0:01:30.887 *********** 2025-06-02 13:31:10.284686 | orchestrator | changed: [testbed-manager] 2025-06-02 13:31:10.284693 | orchestrator | 2025-06-02 13:31:10.284701 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-02 13:31:10.284709 | orchestrator | Monday 02 June 2025 13:29:41 +0000 (0:00:16.664) 0:01:47.551 *********** 2025-06-02 13:31:10.284717 | orchestrator | changed: [testbed-manager] 2025-06-02 13:31:10.284725 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:31:10.284732 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.284740 | orchestrator | changed: [testbed-node-0] 
2025-06-02 13:31:10.284748 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:31:10.284756 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.284764 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:31:10.284771 | orchestrator | 2025-06-02 13:31:10.284779 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 13:31:10.284787 | orchestrator | Monday 02 June 2025 13:29:55 +0000 (0:00:14.279) 0:02:01.830 *********** 2025-06-02 13:31:10.284800 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.284808 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.284816 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:31:10.284823 | orchestrator | 2025-06-02 13:31:10.284831 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 13:31:10.284839 | orchestrator | Monday 02 June 2025 13:30:06 +0000 (0:00:10.635) 0:02:12.466 *********** 2025-06-02 13:31:10.284847 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:31:10.284854 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.284862 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.284870 | orchestrator | 2025-06-02 13:31:10.284878 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 13:31:10.284886 | orchestrator | Monday 02 June 2025 13:30:12 +0000 (0:00:06.051) 0:02:18.518 *********** 2025-06-02 13:31:10.284894 | orchestrator | changed: [testbed-manager] 2025-06-02 13:31:10.284905 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.284913 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:31:10.284921 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.284929 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:31:10.284937 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:31:10.284944 | orchestrator | changed: [testbed-node-4] 2025-06-02 
13:31:10.284952 | orchestrator | 2025-06-02 13:31:10.284960 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 13:31:10.284968 | orchestrator | Monday 02 June 2025 13:30:27 +0000 (0:00:15.226) 0:02:33.744 *********** 2025-06-02 13:31:10.284976 | orchestrator | changed: [testbed-manager] 2025-06-02 13:31:10.284983 | orchestrator | 2025-06-02 13:31:10.284991 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 13:31:10.284999 | orchestrator | Monday 02 June 2025 13:30:36 +0000 (0:00:08.927) 0:02:42.672 *********** 2025-06-02 13:31:10.285007 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:31:10.285015 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:31:10.285022 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:31:10.285030 | orchestrator | 2025-06-02 13:31:10.285038 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 13:31:10.285046 | orchestrator | Monday 02 June 2025 13:30:46 +0000 (0:00:10.104) 0:02:52.776 *********** 2025-06-02 13:31:10.285054 | orchestrator | changed: [testbed-manager] 2025-06-02 13:31:10.285061 | orchestrator | 2025-06-02 13:31:10.285069 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 13:31:10.285077 | orchestrator | Monday 02 June 2025 13:30:57 +0000 (0:00:10.573) 0:03:03.349 *********** 2025-06-02 13:31:10.285085 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:31:10.285093 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:31:10.285101 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:31:10.285108 | orchestrator | 2025-06-02 13:31:10.285116 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:31:10.285124 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 
ignored=0 2025-06-02 13:31:10.285136 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 13:31:10.285145 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 13:31:10.285153 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 13:31:10.285161 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 13:31:10.285169 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 13:31:10.285183 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 13:31:10.285191 | orchestrator | 2025-06-02 13:31:10.285199 | orchestrator | 2025-06-02 13:31:10.285207 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:31:10.285214 | orchestrator | Monday 02 June 2025 13:31:08 +0000 (0:00:11.079) 0:03:14.429 *********** 2025-06-02 13:31:10.285222 | orchestrator | =============================================================================== 2025-06-02 13:31:10.285230 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 20.71s 2025-06-02 13:31:10.285238 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.04s 2025-06-02 13:31:10.285246 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.66s 2025-06-02 13:31:10.285253 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.23s 2025-06-02 13:31:10.285261 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.28s 2025-06-02 13:31:10.285269 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container 
------------- 11.08s 2025-06-02 13:31:10.285277 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.64s 2025-06-02 13:31:10.285285 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.57s 2025-06-02 13:31:10.285292 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.10s 2025-06-02 13:31:10.285300 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.93s 2025-06-02 13:31:10.285308 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.05s 2025-06-02 13:31:10.285316 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.74s 2025-06-02 13:31:10.285323 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.03s 2025-06-02 13:31:10.285362 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.28s 2025-06-02 13:31:10.285372 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.77s 2025-06-02 13:31:10.285380 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 3.10s 2025-06-02 13:31:10.285388 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.00s 2025-06-02 13:31:10.285396 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.94s 2025-06-02 13:31:10.285408 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.89s 2025-06-02 13:31:10.285416 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.57s 2025-06-02 13:31:10.285424 | orchestrator | 2025-06-02 13:31:10 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:10.285432 | orchestrator | 2025-06-02 13:31:10 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:10.285440 | orchestrator | 2025-06-02 13:31:10 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:10.285448 | orchestrator | 2025-06-02 13:31:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:13.328428 | orchestrator | 2025-06-02 13:31:13 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:13.328672 | orchestrator | 2025-06-02 13:31:13 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:13.332088 | orchestrator | 2025-06-02 13:31:13 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:13.332125 | orchestrator | 2025-06-02 13:31:13 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:13.332137 | orchestrator | 2025-06-02 13:31:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:16.379081 | orchestrator | 2025-06-02 13:31:16 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:16.379632 | orchestrator | 2025-06-02 13:31:16 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:16.381456 | orchestrator | 2025-06-02 13:31:16 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:16.382973 | orchestrator | 2025-06-02 13:31:16 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:16.383002 | orchestrator | 2025-06-02 13:31:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:19.432323 | orchestrator | 2025-06-02 13:31:19 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:19.434084 | orchestrator | 2025-06-02 13:31:19 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:19.436189 | orchestrator | 2025-06-02 13:31:19 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:19.438060 | orchestrator | 2025-06-02 13:31:19 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:19.438094 | orchestrator | 2025-06-02 13:31:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:22.478450 | orchestrator | 2025-06-02 13:31:22 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:22.481318 | orchestrator | 2025-06-02 13:31:22 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:22.483039 | orchestrator | 2025-06-02 13:31:22 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:22.484974 | orchestrator | 2025-06-02 13:31:22 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:22.485084 | orchestrator | 2025-06-02 13:31:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:25.521710 | orchestrator | 2025-06-02 13:31:25 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:25.522843 | orchestrator | 2025-06-02 13:31:25 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:25.524526 | orchestrator | 2025-06-02 13:31:25 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:25.526109 | orchestrator | 2025-06-02 13:31:25 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:25.526611 | orchestrator | 2025-06-02 13:31:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:28.571541 | orchestrator | 2025-06-02 13:31:28 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:28.574264 | orchestrator | 2025-06-02 13:31:28 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:28.576799 | orchestrator | 2025-06-02 13:31:28 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:28.578794 | orchestrator | 2025-06-02 13:31:28 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:28.579134 | orchestrator | 2025-06-02 13:31:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:31.626011 | orchestrator | 2025-06-02 13:31:31 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:31.632105 | orchestrator | 2025-06-02 13:31:31 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:31.634446 | orchestrator | 2025-06-02 13:31:31 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:31.636524 | orchestrator | 2025-06-02 13:31:31 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:31.636553 | orchestrator | 2025-06-02 13:31:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:34.685981 | orchestrator | 2025-06-02 13:31:34 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:34.686205 | orchestrator | 2025-06-02 13:31:34 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:34.687693 | orchestrator | 2025-06-02 13:31:34 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:34.689620 | orchestrator | 2025-06-02 13:31:34 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:34.689818 | orchestrator | 2025-06-02 13:31:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:37.736172 | orchestrator | 2025-06-02 13:31:37 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:37.738238 | orchestrator | 2025-06-02 13:31:37 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:37.742127 | orchestrator | 2025-06-02 13:31:37 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:37.744559 | orchestrator | 2025-06-02 13:31:37 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:37.744613 | orchestrator | 2025-06-02 13:31:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:40.791424 | orchestrator | 2025-06-02 13:31:40 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:40.793171 | orchestrator | 2025-06-02 13:31:40 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:40.794232 | orchestrator | 2025-06-02 13:31:40 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:40.795383 | orchestrator | 2025-06-02 13:31:40 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:40.795428 | orchestrator | 2025-06-02 13:31:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:43.850456 | orchestrator | 2025-06-02 13:31:43 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:43.850572 | orchestrator | 2025-06-02 13:31:43 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:43.851068 | orchestrator | 2025-06-02 13:31:43 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:43.852783 | orchestrator | 2025-06-02 13:31:43 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:43.852814 | orchestrator | 2025-06-02 13:31:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:46.885206 | orchestrator | 2025-06-02 13:31:46 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:46.885312 | orchestrator | 2025-06-02 13:31:46 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:46.886137 | orchestrator | 2025-06-02 13:31:46 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:46.886945 | orchestrator | 2025-06-02 13:31:46 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:46.887321 | orchestrator | 2025-06-02 13:31:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:49.917670 | orchestrator | 2025-06-02 13:31:49 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:49.918082 | orchestrator | 2025-06-02 13:31:49 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:49.921120 | orchestrator | 2025-06-02 13:31:49 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:49.921938 | orchestrator | 2025-06-02 13:31:49 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:49.922304 | orchestrator | 2025-06-02 13:31:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:52.950990 | orchestrator | 2025-06-02 13:31:52 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:52.951500 | orchestrator | 2025-06-02 13:31:52 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:52.952316 | orchestrator | 2025-06-02 13:31:52 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:52.953225 | orchestrator | 2025-06-02 13:31:52 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:52.953256 | orchestrator | 2025-06-02 13:31:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:55.984751 | orchestrator | 2025-06-02 13:31:55 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:55.985517 | orchestrator | 2025-06-02 13:31:55 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:55.985550 | orchestrator | 2025-06-02 13:31:55 | INFO  | Task 
42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:55.986170 | orchestrator | 2025-06-02 13:31:55 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state STARTED 2025-06-02 13:31:55.986196 | orchestrator | 2025-06-02 13:31:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:59.037948 | orchestrator | 2025-06-02 13:31:59 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:31:59.039069 | orchestrator | 2025-06-02 13:31:59 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:31:59.039101 | orchestrator | 2025-06-02 13:31:59 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:31:59.040857 | orchestrator | 2025-06-02 13:31:59 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:31:59.045104 | orchestrator | 2025-06-02 13:31:59 | INFO  | Task 22ee40be-1cdf-4601-a97c-f2ded471b8bc is in state SUCCESS 2025-06-02 13:31:59.046328 | orchestrator | 2025-06-02 13:31:59.046387 | orchestrator | 2025-06-02 13:31:59.046401 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:31:59.046413 | orchestrator | 2025-06-02 13:31:59.046489 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:31:59.046505 | orchestrator | Monday 02 June 2025 13:28:12 +0000 (0:00:00.242) 0:00:00.242 *********** 2025-06-02 13:31:59.046517 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:31:59.046528 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:31:59.046539 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:31:59.046550 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:31:59.046561 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:31:59.046572 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:31:59.046583 | orchestrator | 2025-06-02 13:31:59.046594 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-06-02 13:31:59.046605 | orchestrator | Monday 02 June 2025 13:28:12 +0000 (0:00:00.621) 0:00:00.864 *********** 2025-06-02 13:31:59.046616 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-02 13:31:59.046627 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-02 13:31:59.046638 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-02 13:31:59.046649 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-02 13:31:59.046683 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-02 13:31:59.046694 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-02 13:31:59.046705 | orchestrator | 2025-06-02 13:31:59.046716 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-02 13:31:59.046830 | orchestrator | 2025-06-02 13:31:59.046842 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 13:31:59.047192 | orchestrator | Monday 02 June 2025 13:28:13 +0000 (0:00:00.608) 0:00:01.472 *********** 2025-06-02 13:31:59.047211 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:31:59.047222 | orchestrator | 2025-06-02 13:31:59.047233 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-02 13:31:59.047244 | orchestrator | Monday 02 June 2025 13:28:14 +0000 (0:00:01.130) 0:00:02.602 *********** 2025-06-02 13:31:59.047254 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-02 13:31:59.047265 | orchestrator | 2025-06-02 13:31:59.047276 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-02 13:31:59.047286 | orchestrator | Monday 02 June 2025 
13:28:17 +0000 (0:00:02.885) 0:00:05.488 *********** 2025-06-02 13:31:59.047297 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-02 13:31:59.047308 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-02 13:31:59.047318 | orchestrator | 2025-06-02 13:31:59.047329 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-02 13:31:59.047340 | orchestrator | Monday 02 June 2025 13:28:23 +0000 (0:00:05.583) 0:00:11.071 *********** 2025-06-02 13:31:59.047374 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:31:59.047484 | orchestrator | 2025-06-02 13:31:59.047500 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-02 13:31:59.047511 | orchestrator | Monday 02 June 2025 13:28:26 +0000 (0:00:02.940) 0:00:14.012 *********** 2025-06-02 13:31:59.047522 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:31:59.047754 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-02 13:31:59.047770 | orchestrator | 2025-06-02 13:31:59.047781 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-02 13:31:59.047791 | orchestrator | Monday 02 June 2025 13:28:29 +0000 (0:00:03.462) 0:00:17.474 *********** 2025-06-02 13:31:59.047802 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:31:59.047813 | orchestrator | 2025-06-02 13:31:59.047823 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-02 13:31:59.047834 | orchestrator | Monday 02 June 2025 13:28:32 +0000 (0:00:02.820) 0:00:20.295 *********** 2025-06-02 13:31:59.047845 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-02 
13:31:59.047855 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-02 13:31:59.047866 | orchestrator | 2025-06-02 13:31:59.047877 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-02 13:31:59.047888 | orchestrator | Monday 02 June 2025 13:28:39 +0000 (0:00:07.327) 0:00:27.623 *********** 2025-06-02 13:31:59.048000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.048033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.048045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.048058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.048071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.048244 | orchestrator |
2025-06-02 13:31:59.048255 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 13:31:59.048266 | orchestrator | Monday 02 June 2025 13:28:42 +0000 (0:00:02.708) 0:00:30.332 ***********
2025-06-02 13:31:59.048277 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.048288 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:31:59.048299 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:31:59.048309 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:31:59.048320 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:31:59.048331 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:31:59.048341 | orchestrator |
2025-06-02 13:31:59.048390 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 13:31:59.048403 | orchestrator | Monday 02 June 2025 13:28:42 +0000 (0:00:00.426) 0:00:30.758 ***********
2025-06-02 13:31:59.048413 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.048424 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:31:59.048435 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:31:59.048445 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:31:59.048456 | orchestrator |
2025-06-02 13:31:59.048467 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-02 13:31:59.048478 | orchestrator | Monday 02 June 2025 13:28:43 +0000 (0:00:00.850) 0:00:31.608 ***********
2025-06-02 13:31:59.048488 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-02 13:31:59.048499 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-02 13:31:59.048510 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-02 13:31:59.048520 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-02 13:31:59.048531 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-02 13:31:59.048541 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-02 13:31:59.048552 | orchestrator |
2025-06-02 13:31:59.048562 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-02 13:31:59.048573 | orchestrator | Monday 02 June 2025 13:28:45 +0000 (0:00:01.933) 0:00:33.542 ***********
2025-06-02 13:31:59.048585 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048600 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048655 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048670 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048682 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048694 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048706 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048752 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048765 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048777 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048788 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048805 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 13:31:59.048816 | orchestrator |
2025-06-02 13:31:59.048828 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-02 13:31:59.048838 | orchestrator | Monday 02 June 2025 13:28:49 +0000 (0:00:03.543) 0:00:37.085 ***********
2025-06-02 13:31:59.048849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 13:31:59.048861 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 13:31:59.048872 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 13:31:59.048882 | orchestrator |
2025-06-02 13:31:59.048893 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-02 13:31:59.048908 | orchestrator | Monday 02 June 2025 13:28:51 +0000 (0:00:02.182) 0:00:39.268 ***********
2025-06-02 13:31:59.048942 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-02 13:31:59.048955 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-02 13:31:59.048965 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 13:31:59.048976 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 13:31:59.048987 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 13:31:59.048997 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 13:31:59.049007 | orchestrator |
2025-06-02 13:31:59.049018 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 13:31:59.049029 | orchestrator | Monday 02 June 2025 13:28:54 +0000 (0:00:02.875) 0:00:42.143 ***********
2025-06-02 13:31:59.049039 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 13:31:59.049050 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 13:31:59.049061 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 13:31:59.049071 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 13:31:59.049081 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 13:31:59.049092 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 13:31:59.049102 | orchestrator |
2025-06-02 13:31:59.049113 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 13:31:59.049123 | orchestrator | Monday 02 June 2025 13:28:55 +0000 (0:00:01.033) 0:00:43.177 ***********
2025-06-02 13:31:59.049134 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.049145 | orchestrator |
2025-06-02 13:31:59.049155 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 13:31:59.049166 | orchestrator | Monday 02 June 2025 13:28:55 +0000 (0:00:00.194) 0:00:43.371 ***********
2025-06-02 13:31:59.049177 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.049187 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:31:59.049210 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:31:59.049220 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:31:59.049231 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:31:59.049241 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:31:59.049252 | orchestrator |
2025-06-02 13:31:59.049262 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 13:31:59.049273 | orchestrator | Monday 02 June 2025 13:28:56 +0000 (0:00:00.855) 0:00:44.227 ***********
2025-06-02 13:31:59.049284 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:31:59.049296 | orchestrator |
2025-06-02 13:31:59.049307 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 13:31:59.049318 | orchestrator | Monday 02 June 2025 13:28:57 +0000 (0:00:00.946) 0:00:45.174 ***********
2025-06-02 13:31:59.049329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049611 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049622 | orchestrator |
2025-06-02 13:31:59.049633 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-06-02 13:31:59.049644 | orchestrator | Monday 02 June 2025 13:29:00 +0000 (0:00:02.796) 0:00:47.970 ***********
2025-06-02 13:31:59.049665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 13:31:59.049739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049749 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.049759 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:31:59.049769 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:31:59.049790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.049820 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:31:59.049830 | orchestrator
| skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049850 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:59.049860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049897 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:59.049907 | orchestrator | 2025-06-02 13:31:59.049917 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 13:31:59.049926 | orchestrator | Monday 02 June 2025 13:29:01 +0000 (0:00:01.710) 0:00:49.681 *********** 2025-06-02 13:31:59.049936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.049947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049956 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:59.049966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.049977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.049987 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:59.050006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050063 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050076 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:59.050086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.050107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2025-06-02 13:31:59.050144 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:59.050154 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:59.050164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050185 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:59.050194 | orchestrator | 2025-06-02 13:31:59.050204 | orchestrator | TASK [cinder : Copying over 
config.json files for services] ******************** 2025-06-02 13:31:59.050214 | orchestrator | Monday 02 June 2025 13:29:04 +0000 (0:00:02.410) 0:00:52.091 *********** 2025-06-02 13:31:59.050223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050390 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050406 | orchestrator | 2025-06-02 
13:31:59.050415 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 13:31:59.050425 | orchestrator | Monday 02 June 2025 13:29:07 +0000 (0:00:02.828) 0:00:54.920 *********** 2025-06-02 13:31:59.050435 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 13:31:59.050445 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 13:31:59.050454 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:59.050464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 13:31:59.050473 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 13:31:59.050483 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:59.050492 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 13:31:59.050506 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:59.050521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 13:31:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:31:59.050542 | orchestrator | 2025-06-02 13:31:59.050551 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 13:31:59.050561 | orchestrator | Monday 02 June 2025 13:29:09 +0000 (0:00:02.622) 0:00:57.542 *********** 2025-06-02 13:31:59.050570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.050629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.050725 | orchestrator | 2025-06-02 13:31:59.050734 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 13:31:59.050744 | orchestrator | Monday 02 June 2025 13:29:19 +0000 (0:00:10.247) 0:01:07.790 *********** 2025-06-02 13:31:59.050753 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:59.050763 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:59.050773 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:59.050782 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:31:59.050791 | 
orchestrator | changed: [testbed-node-5] 2025-06-02 13:31:59.050801 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:31:59.050810 | orchestrator | 2025-06-02 13:31:59.050820 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 13:31:59.050829 | orchestrator | Monday 02 June 2025 13:29:22 +0000 (0:00:02.752) 0:01:10.542 *********** 2025-06-02 13:31:59.050839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.050854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 
13:31:59.050865 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:59.050884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.050894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 13:31:59.050915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050931 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:59.050941 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:59.050950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.050975 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:59.050991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-06-02 13:31:59.051001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.051017 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:59.051027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.051038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 13:31:59.051048 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:59.051057 | orchestrator | 2025-06-02 13:31:59.051067 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 13:31:59.051076 | orchestrator | Monday 02 June 2025 13:29:24 +0000 (0:00:01.521) 0:01:12.064 *********** 2025-06-02 13:31:59.051086 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:31:59.051095 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:31:59.051104 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:31:59.051114 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:31:59.051123 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:31:59.051132 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:31:59.051142 | orchestrator | 2025-06-02 13:31:59.051151 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 13:31:59.051161 | orchestrator | Monday 02 June 2025 13:29:25 +0000 (0:00:00.891) 0:01:12.956 *********** 2025-06-02 13:31:59.051183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.051194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.051210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 13:31:59.051220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051250 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 13:31:59.051317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.051328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 13:31:59.051344 | orchestrator |
2025-06-02 13:31:59.051403 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 13:31:59.051414 | orchestrator | Monday 02 June 2025 13:29:27 +0000 (0:00:02.584) 0:01:15.541 ***********
2025-06-02 13:31:59.051424 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.051434 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:31:59.051443 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:31:59.051453 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:31:59.051462 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:31:59.051472 | orchestrator |
skipping: [testbed-node-5]
2025-06-02 13:31:59.051481 | orchestrator |
2025-06-02 13:31:59.051490 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-06-02 13:31:59.051498 | orchestrator | Monday 02 June 2025 13:29:28 +0000 (0:00:00.585) 0:01:16.127 ***********
2025-06-02 13:31:59.051506 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:31:59.051514 | orchestrator |
2025-06-02 13:31:59.051522 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-06-02 13:31:59.051529 | orchestrator | Monday 02 June 2025 13:29:29 +0000 (0:00:01.711) 0:01:17.838 ***********
2025-06-02 13:31:59.051537 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:31:59.051545 | orchestrator |
2025-06-02 13:31:59.051553 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-06-02 13:31:59.051560 | orchestrator | Monday 02 June 2025 13:29:31 +0000 (0:00:01.828) 0:01:19.667 ***********
2025-06-02 13:31:59.051568 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:31:59.051576 | orchestrator |
2025-06-02 13:31:59.051584 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051591 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:18.359) 0:01:38.026 ***********
2025-06-02 13:31:59.051599 | orchestrator |
2025-06-02 13:31:59.051607 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051615 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.057) 0:01:38.084 ***********
2025-06-02 13:31:59.051622 | orchestrator |
2025-06-02 13:31:59.051630 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051638 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.058) 0:01:38.142 ***********
2025-06-02 13:31:59.051646 | orchestrator |
2025-06-02 13:31:59.051653 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051661 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.060) 0:01:38.202 ***********
2025-06-02 13:31:59.051669 | orchestrator |
2025-06-02 13:31:59.051677 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051684 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.057) 0:01:38.260 ***********
2025-06-02 13:31:59.051692 | orchestrator |
2025-06-02 13:31:59.051700 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 13:31:59.051707 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.056) 0:01:38.317 ***********
2025-06-02 13:31:59.051715 | orchestrator |
2025-06-02 13:31:59.051723 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-06-02 13:31:59.051731 | orchestrator | Monday 02 June 2025 13:29:50 +0000 (0:00:00.075) 0:01:38.392 ***********
2025-06-02 13:31:59.051738 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:31:59.051746 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:31:59.051754 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:31:59.051761 | orchestrator |
2025-06-02 13:31:59.051769 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-06-02 13:31:59.051777 | orchestrator | Monday 02 June 2025 13:30:17 +0000 (0:00:27.078) 0:02:05.471 ***********
2025-06-02 13:31:59.051785 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:31:59.051792 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:31:59.051800 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:31:59.051808 | orchestrator |
2025-06-02 13:31:59.051816 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-06-02 13:31:59.051828 | orchestrator | Monday 02 June 2025 13:30:27 +0000 (0:00:09.617) 0:02:15.088 ***********
2025-06-02 13:31:59.051836 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:31:59.051844 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:31:59.051852 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:31:59.051859 | orchestrator |
2025-06-02 13:31:59.051867 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-06-02 13:31:59.051875 | orchestrator | Monday 02 June 2025 13:31:46 +0000 (0:01:19.655) 0:03:34.743 ***********
2025-06-02 13:31:59.051883 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:31:59.051890 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:31:59.051898 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:31:59.051906 | orchestrator |
2025-06-02 13:31:59.051914 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-06-02 13:31:59.051925 | orchestrator | Monday 02 June 2025 13:31:55 +0000 (0:00:08.197) 0:03:42.940 ***********
2025-06-02 13:31:59.051933 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:31:59.051941 | orchestrator |
2025-06-02 13:31:59.051953 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:31:59.051962 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 13:31:59.051970 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 13:31:59.051978 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 13:31:59.051986 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 13:31:59.051994 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 13:31:59.052001 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 13:31:59.052009 | orchestrator |
2025-06-02 13:31:59.052017 | orchestrator |
2025-06-02 13:31:59.052025 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:31:59.052032 | orchestrator | Monday 02 June 2025 13:31:56 +0000 (0:00:01.296) 0:03:44.237 ***********
2025-06-02 13:31:59.052040 | orchestrator | ===============================================================================
2025-06-02 13:31:59.052048 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 79.66s
2025-06-02 13:31:59.052056 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.08s
2025-06-02 13:31:59.052063 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.36s
2025-06-02 13:31:59.052071 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.25s
2025-06-02 13:31:59.052079 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.62s
2025-06-02 13:31:59.052086 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.20s
2025-06-02 13:31:59.052094 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.33s
2025-06-02 13:31:59.052102 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.58s
2025-06-02 13:31:59.052110 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.54s
2025-06-02 13:31:59.052117 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.46s
2025-06-02 13:31:59.052125 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.94s
2025-06-02 13:31:59.052133 |
orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.89s 2025-06-02 13:31:59.052145 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.88s 2025-06-02 13:31:59.052153 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.83s 2025-06-02 13:31:59.052160 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.82s 2025-06-02 13:31:59.052168 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.80s 2025-06-02 13:31:59.052176 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.75s 2025-06-02 13:31:59.052184 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.71s 2025-06-02 13:31:59.052191 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.62s 2025-06-02 13:31:59.052199 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.58s 2025-06-02 13:32:02.077291 | orchestrator | 2025-06-02 13:32:02 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:32:02.078011 | orchestrator | 2025-06-02 13:32:02 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:32:02.078785 | orchestrator | 2025-06-02 13:32:02 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:32:02.079426 | orchestrator | 2025-06-02 13:32:02 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:32:02.079450 | orchestrator | 2025-06-02 13:32:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:32:05.106847 | orchestrator | 2025-06-02 13:32:05 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:32:05.107603 | orchestrator | 2025-06-02 13:32:05 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is 
in state STARTED 2025-06-02 13:32:05.108499 | orchestrator | 2025-06-02 13:32:05 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:32:05.109322 | orchestrator | 2025-06-02 13:32:05 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:32:05.109513 | orchestrator | 2025-06-02 13:32:05 | INFO  | Wait 1 second(s) until the next check
[identical polling rounds for tasks e33e47bf-4ffe-476b-b4c1-9c5ac68c904f, cfc00f82-29fd-4b7c-a7de-94981618e131, 44c24d1c-a336-47f6-ab6b-b48720d2962a and 42e65903-d471-411a-b811-67a5208c9ead repeated every ~3 s from 13:32:08 through 13:33:14; only the state changes are kept below]
2025-06-02 13:32:23.305868 | orchestrator | 2025-06-02 13:32:23 | INFO  | Task 35b1c43a-1f85-4521-a917-36836747a600 is in state STARTED
2025-06-02 13:32:38.485812 | orchestrator | 2025-06-02 13:32:38 | INFO  | Task 35b1c43a-1f85-4521-a917-36836747a600 is in state SUCCESS
2025-06-02 13:33:14.937692 | orchestrator | 2025-06-02 13:33:14 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:33:14.937799 | orchestrator | 2025-06-02 13:33:14 | INFO  | Task
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:33:14.938328 | orchestrator | 2025-06-02 13:33:14 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:33:14.938826 | orchestrator | 2025-06-02 13:33:14 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state STARTED 2025-06-02 13:33:14.938850 | orchestrator | 2025-06-02 13:33:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:33:17.973457 | orchestrator | 2025-06-02 13:33:17 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:33:17.973871 | orchestrator | 2025-06-02 13:33:17 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:33:17.974401 | orchestrator | 2025-06-02 13:33:17 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED 2025-06-02 13:33:17.975843 | orchestrator | 2025-06-02 13:33:17 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:33:17.976994 | orchestrator | 2025-06-02 13:33:17 | INFO  | Task 42e65903-d471-411a-b811-67a5208c9ead is in state SUCCESS 2025-06-02 13:33:17.978426 | orchestrator | 2025-06-02 13:33:17.978465 | orchestrator | None 2025-06-02 13:33:17.978478 | orchestrator | 2025-06-02 13:33:17.978501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:33:17.978513 | orchestrator | 2025-06-02 13:33:17.978524 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:33:17.978535 | orchestrator | Monday 02 June 2025 13:31:13 +0000 (0:00:00.255) 0:00:00.255 *********** 2025-06-02 13:33:17.978545 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:33:17.978557 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:33:17.979018 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:33:17.979046 | orchestrator | 2025-06-02 13:33:17.979058 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-06-02 13:33:17.979088 | orchestrator | Monday 02 June 2025 13:31:13 +0000 (0:00:00.275) 0:00:00.531 *********** 2025-06-02 13:33:17.979099 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-02 13:33:17.979110 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-02 13:33:17.979121 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-02 13:33:17.979132 | orchestrator | 2025-06-02 13:33:17.979143 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-02 13:33:17.979154 | orchestrator | 2025-06-02 13:33:17.979164 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 13:33:17.979176 | orchestrator | Monday 02 June 2025 13:31:13 +0000 (0:00:00.397) 0:00:00.928 *********** 2025-06-02 13:33:17.979188 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:33:17.979199 | orchestrator | 2025-06-02 13:33:17.979210 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-02 13:33:17.979221 | orchestrator | Monday 02 June 2025 13:31:14 +0000 (0:00:00.511) 0:00:01.439 *********** 2025-06-02 13:33:17.979232 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-02 13:33:17.979283 | orchestrator | 2025-06-02 13:33:17.979317 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-02 13:33:17.979328 | orchestrator | Monday 02 June 2025 13:31:17 +0000 (0:00:03.135) 0:00:04.575 *********** 2025-06-02 13:33:17.979339 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-02 13:33:17.979350 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 
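The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages earlier in this log come from a simple poll-until-done loop over pending task IDs. A minimal sketch of that pattern (the `fetch_state` lookup and `make_fake_fetcher` helper are illustrative stand-ins, not the actual OSISM implementation):

```python
import time

def wait_for_tasks(task_ids, fetch_state, poll_interval=1.0):
    """Poll every task until each one leaves the STARTED state,
    logging one status line per task per round, as in the log above."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {poll_interval:g} second(s) until the next check")
            time.sleep(poll_interval)
    return results

# Illustrative stand-in for a real task-state lookup: each task reports
# STARTED for a fixed number of rounds before flipping to SUCCESS.
def make_fake_fetcher(rounds_until_done):
    calls = {}
    def fetch_state(task_id):
        calls[task_id] = calls.get(task_id, 0) + 1
        return "SUCCESS" if calls[task_id] > rounds_until_done[task_id] else "STARTED"
    return fetch_state
```

With `wait_for_tasks(["a", "b"], make_fake_fetcher({"a": 2, "b": 0}), poll_interval=0)` the loop keeps polling "a" for two extra rounds after "b" finishes, mirroring how the cinder/barbican tasks above completed at different times.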
2025-06-02 13:33:17.979361 | orchestrator | 2025-06-02 13:33:17.979399 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-02 13:33:17.979423 | orchestrator | Monday 02 June 2025 13:31:23 +0000 (0:00:06.085) 0:00:10.661 *********** 2025-06-02 13:33:17.979434 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:33:17.979445 | orchestrator | 2025-06-02 13:33:17.979456 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-02 13:33:17.979467 | orchestrator | Monday 02 June 2025 13:31:26 +0000 (0:00:02.983) 0:00:13.645 *********** 2025-06-02 13:33:17.979477 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:33:17.979488 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-02 13:33:17.979499 | orchestrator | 2025-06-02 13:33:17.979510 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-02 13:33:17.979521 | orchestrator | Monday 02 June 2025 13:31:30 +0000 (0:00:03.725) 0:00:17.370 *********** 2025-06-02 13:33:17.979531 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:33:17.979542 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-02 13:33:17.979553 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-02 13:33:17.979563 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-02 13:33:17.979574 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-02 13:33:17.979585 | orchestrator | 2025-06-02 13:33:17.979610 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-02 13:33:17.979621 | orchestrator | Monday 02 June 2025 13:31:45 +0000 (0:00:15.236) 0:00:32.606 *********** 2025-06-02 13:33:17.979632 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-02 
13:33:17.979644 | orchestrator | 2025-06-02 13:33:17.979657 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-02 13:33:17.979669 | orchestrator | Monday 02 June 2025 13:31:49 +0000 (0:00:04.013) 0:00:36.620 *********** 2025-06-02 13:33:17.979685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.979714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.979729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.979747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979768 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979817 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.979842 | orchestrator | 2025-06-02 13:33:17.979854 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-02 13:33:17.979866 | orchestrator | Monday 02 June 2025 13:31:51 +0000 (0:00:01.938) 0:00:38.559 *********** 2025-06-02 13:33:17.979879 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-02 13:33:17.979891 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-02 13:33:17.979908 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-02 13:33:17.979921 | orchestrator | 2025-06-02 13:33:17.979934 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-02 
13:33:17.979946 | orchestrator | Monday 02 June 2025 13:31:52 +0000 (0:00:01.287) 0:00:39.847 *********** 2025-06-02 13:33:17.979963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.979973 | orchestrator | 2025-06-02 13:33:17.979984 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-02 13:33:17.979995 | orchestrator | Monday 02 June 2025 13:31:52 +0000 (0:00:00.109) 0:00:39.956 *********** 2025-06-02 13:33:17.980006 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.980016 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:33:17.980027 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:33:17.980037 | orchestrator | 2025-06-02 13:33:17.980048 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 13:33:17.980059 | orchestrator | Monday 02 June 2025 13:31:53 +0000 (0:00:00.516) 0:00:40.472 *********** 2025-06-02 13:33:17.980070 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:33:17.980080 | orchestrator | 2025-06-02 13:33:17.980091 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-02 13:33:17.980102 | orchestrator | Monday 02 June 2025 13:31:53 +0000 (0:00:00.549) 0:00:41.022 *********** 2025-06-02 13:33:17.980113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980239 | orchestrator | 2025-06-02 13:33:17.980250 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-02 13:33:17.980267 | orchestrator | Monday 02 June 2025 13:31:58 +0000 (0:00:04.561) 0:00:45.584 *********** 2025-06-02 13:33:17.980283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980317 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.980336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980426 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:33:17.980438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980449 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:33:17.980460 | orchestrator | 
2025-06-02 13:33:17.980477 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 13:33:17.980489 | orchestrator | Monday 02 June 2025 13:32:00 +0000 (0:00:01.639) 0:00:47.223 *********** 2025-06-02 13:33:17.980536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980577 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980589 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:33:17.980600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980642 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.980659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.980675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.980698 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:33:17.980709 | orchestrator | 2025-06-02 13:33:17.980719 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 13:33:17.980730 | orchestrator | Monday 02 June 2025 13:32:00 +0000 (0:00:00.975) 0:00:48.198 *********** 2025-06-02 13:33:17.980741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980773 | orchestrator | 2025-06-02 13:33:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:33:17.980790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.980806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.980887 | orchestrator | 2025-06-02 13:33:17.980898 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 13:33:17.980909 | orchestrator | Monday 02 June 2025 13:32:04 +0000 (0:00:03.803) 0:00:52.001 *********** 2025-06-02 13:33:17.980920 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:33:17.980931 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:33:17.980942 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:33:17.980952 | orchestrator | 2025-06-02 13:33:17.980967 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 13:33:17.980978 | orchestrator | Monday 02 June 2025 13:32:07 +0000 (0:00:02.293) 0:00:54.295 *********** 2025-06-02 13:33:17.980990 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:33:17.981000 | orchestrator | 2025-06-02 13:33:17.981011 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-02 13:33:17.981022 | orchestrator | Monday 02 
June 2025 13:32:08 +0000 (0:00:01.305) 0:00:55.600 *********** 2025-06-02 13:33:17.981033 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.981043 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:33:17.981054 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:33:17.981065 | orchestrator | 2025-06-02 13:33:17.981075 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 13:33:17.981086 | orchestrator | Monday 02 June 2025 13:32:09 +0000 (0:00:01.094) 0:00:56.696 *********** 2025-06-02 13:33:17.981097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981233 | orchestrator | 2025-06-02 13:33:17.981244 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 13:33:17.981255 | orchestrator | Monday 02 June 2025 13:32:17 +0000 (0:00:08.512) 0:01:05.209 *********** 2025-06-02 13:33:17.981270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.981281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981310 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:33:17.981327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.981338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981350 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981361 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:33:17.981425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 13:33:17.981437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:33:17.981467 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:33:17.981478 | orchestrator | 2025-06-02 13:33:17.981489 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 13:33:17.981500 | orchestrator | Monday 02 June 2025 13:32:19 +0000 (0:00:01.174) 0:01:06.383 *********** 2025-06-02 13:33:17.981519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 13:33:17.981559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:33:17.981637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:33:17.981648 | orchestrator |
2025-06-02 13:33:17.981660 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 13:33:17.981671 | orchestrator | Monday 02 June 2025 13:32:22 +0000 (0:00:02.984) 0:01:09.368 ***********
2025-06-02 13:33:17.981687 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:33:17.981698 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:33:17.981709 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:33:17.981720 | orchestrator |
2025-06-02 13:33:17.981731 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-06-02 13:33:17.981741 | orchestrator | Monday 02 June 2025 13:32:22 +0000 (0:00:00.664) 0:01:10.032 ***********
2025-06-02 13:33:17.981752 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.981763 | orchestrator |
2025-06-02 13:33:17.981774 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-06-02 13:33:17.981784 | orchestrator | Monday 02 June 2025 13:32:24 +0000 (0:00:02.177) 0:01:12.210 ***********
2025-06-02 13:33:17.981795 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.981805 | orchestrator |
2025-06-02 13:33:17.981816 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-06-02 13:33:17.981827 | orchestrator | Monday 02 June 2025 13:32:27 +0000 (0:00:02.019) 0:01:14.230 ***********
2025-06-02 13:33:17.981837 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.981848 | orchestrator |
2025-06-02 13:33:17.981859 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 13:33:17.981869 | orchestrator | Monday 02 June 2025 13:32:39 +0000 (0:00:12.509) 0:01:26.739 ***********
2025-06-02 13:33:17.981880 | orchestrator |
2025-06-02 13:33:17.981891 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 13:33:17.981901 | orchestrator | Monday 02 June 2025 13:32:39 +0000 (0:00:00.214) 0:01:26.953 ***********
2025-06-02 13:33:17.981912 | orchestrator |
2025-06-02 13:33:17.981922 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 13:33:17.981933 | orchestrator | Monday 02 June 2025 13:32:39 +0000 (0:00:00.200) 0:01:27.153 ***********
2025-06-02 13:33:17.981942 | orchestrator |
2025-06-02 13:33:17.981952 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-06-02 13:33:17.981961 | orchestrator | Monday 02 June 2025 13:32:40 +0000 (0:00:00.127) 0:01:27.281 ***********
2025-06-02 13:33:17.981971 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.981980 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:33:17.981990 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:33:17.981999 | orchestrator |
2025-06-02 13:33:17.982009 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-06-02 13:33:17.982082 | orchestrator | Monday 02 June 2025 13:32:51 +0000 (0:00:11.520) 0:01:38.802 ***********
2025-06-02 13:33:17.982093 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.982103 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:33:17.982112 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:33:17.982122 | orchestrator |
2025-06-02 13:33:17.982131 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-06-02 13:33:17.982141 | orchestrator | Monday 02 June 2025 13:33:02 +0000 (0:00:10.772) 0:01:49.574 ***********
2025-06-02 13:33:17.982150 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:33:17.982159 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:33:17.982169 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:33:17.982178 | orchestrator |
2025-06-02 13:33:17.982188 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:33:17.982198 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 13:33:17.982208 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:33:17.982218 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:33:17.982228 | orchestrator |
2025-06-02 13:33:17.982237 | orchestrator |
2025-06-02 13:33:17.982247 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:33:17.982263 | orchestrator | Monday 02 June 2025 13:33:14 +0000 (0:00:11.811) 0:02:01.386 ***********
2025-06-02 13:33:17.982272 | orchestrator | ===============================================================================
2025-06-02 13:33:17.982282 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.24s
2025-06-02 13:33:17.982291 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.51s
2025-06-02 13:33:17.982300 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.81s
2025-06-02 13:33:17.982310 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.52s
2025-06-02 13:33:17.982319 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.77s
2025-06-02 13:33:17.982329 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.51s
2025-06-02 13:33:17.982338 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.09s
2025-06-02 13:33:17.982352 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.56s
2025-06-02 13:33:17.982362 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.01s
2025-06-02 13:33:17.982388 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.80s
2025-06-02 13:33:17.982398 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.73s
2025-06-02 13:33:17.982408 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.14s
2025-06-02 13:33:17.982417 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.98s
2025-06-02 13:33:17.982427 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 2.98s
2025-06-02 13:33:17.982436 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.29s
2025-06-02 13:33:17.982446 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.18s
2025-06-02 13:33:17.982455 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.02s
2025-06-02 13:33:17.982465 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.94s
2025-06-02 13:33:17.982474 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.64s
2025-06-02 13:33:17.982484 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.31s
2025-06-02 13:33:21.001849 | orchestrator | 2025-06-02 13:33:21 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:21.002103 | orchestrator | 2025-06-02 13:33:21 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:21.002580 | orchestrator | 2025-06-02 13:33:21 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:21.003078 | orchestrator | 2025-06-02 13:33:21 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:21.003101 | orchestrator | 2025-06-02 13:33:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:24.035670 | orchestrator | 2025-06-02 13:33:24 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:24.035775 | orchestrator | 2025-06-02 13:33:24 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:24.036132 | orchestrator | 2025-06-02 13:33:24 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:24.036826 | orchestrator | 2025-06-02 13:33:24 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:24.036848 | orchestrator | 2025-06-02 13:33:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:27.083162 | orchestrator | 2025-06-02 13:33:27 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:27.086303 | orchestrator | 2025-06-02 13:33:27 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:27.086560 | orchestrator | 2025-06-02 13:33:27 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:27.087267 | orchestrator | 2025-06-02 13:33:27 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:27.087291 | orchestrator | 2025-06-02 13:33:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:30.121246 | orchestrator | 2025-06-02 13:33:30 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:30.121678 | orchestrator | 2025-06-02 13:33:30 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:30.122490 | orchestrator | 2025-06-02 13:33:30 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:30.123221 | orchestrator | 2025-06-02 13:33:30 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:30.123251 | orchestrator | 2025-06-02 13:33:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:33.168560 | orchestrator | 2025-06-02 13:33:33 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:33.171024 | orchestrator | 2025-06-02 13:33:33 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:33.172795 | orchestrator | 2025-06-02 13:33:33 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:33.174346 | orchestrator | 2025-06-02 13:33:33 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:33.174412 | orchestrator | 2025-06-02 13:33:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:36.215287 | orchestrator | 2025-06-02 13:33:36 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:36.217116 | orchestrator | 2025-06-02 13:33:36 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:36.218727 | orchestrator | 2025-06-02 13:33:36 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:36.220130 | orchestrator | 2025-06-02 13:33:36 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:36.220595 | orchestrator | 2025-06-02 13:33:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:39.273669 | orchestrator | 2025-06-02 13:33:39 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:39.274202 | orchestrator | 2025-06-02 13:33:39 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:39.277047 | orchestrator | 2025-06-02 13:33:39 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:39.278511 | orchestrator | 2025-06-02 13:33:39 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:39.278549 | orchestrator | 2025-06-02 13:33:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:42.321201 | orchestrator | 2025-06-02 13:33:42 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:42.321518 | orchestrator | 2025-06-02 13:33:42 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:42.322566 | orchestrator | 2025-06-02 13:33:42 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:42.323555 | orchestrator | 2025-06-02 13:33:42 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:42.323580 | orchestrator | 2025-06-02 13:33:42 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:45.374732 | orchestrator | 2025-06-02 13:33:45 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:45.374847 | orchestrator | 2025-06-02 13:33:45 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:45.375401 | orchestrator | 2025-06-02 13:33:45 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:45.376187 | orchestrator | 2025-06-02 13:33:45 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:45.376210 | orchestrator | 2025-06-02 13:33:45 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:48.412595 | orchestrator | 2025-06-02 13:33:48 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:48.414452 | orchestrator | 2025-06-02 13:33:48 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:48.416682 | orchestrator | 2025-06-02 13:33:48 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:48.418544 | orchestrator | 2025-06-02 13:33:48 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:48.418587 | orchestrator | 2025-06-02 13:33:48 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:51.460964 | orchestrator | 2025-06-02 13:33:51 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:51.462242 | orchestrator | 2025-06-02 13:33:51 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:51.463116 | orchestrator | 2025-06-02 13:33:51 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:51.464406 | orchestrator | 2025-06-02 13:33:51 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:51.464447 | orchestrator | 2025-06-02 13:33:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:54.502811 | orchestrator | 2025-06-02 13:33:54 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:54.502905 | orchestrator | 2025-06-02 13:33:54 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:54.502921 | orchestrator | 2025-06-02 13:33:54 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:54.502933 | orchestrator | 2025-06-02 13:33:54 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:54.502944 | orchestrator | 2025-06-02 13:33:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:33:57.535663 | orchestrator | 2025-06-02 13:33:57 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:33:57.536147 | orchestrator | 2025-06-02 13:33:57 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:33:57.538573 | orchestrator | 2025-06-02 13:33:57 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state STARTED
2025-06-02 13:33:57.541075 | orchestrator | 2025-06-02 13:33:57 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:33:57.541099 | orchestrator | 2025-06-02 13:33:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:00.574644 | orchestrator | 2025-06-02 13:34:00 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:00.574845 | orchestrator | 2025-06-02 13:34:00 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:34:00.576207 | orchestrator | 2025-06-02 13:34:00 | INFO  | Task 5e5b7926-0cb5-44e1-a894-7a9bd303fe9f is in state SUCCESS
2025-06-02 13:34:00.576261 | orchestrator | 2025-06-02 13:34:00 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:34:00.576274 | orchestrator | 2025-06-02 13:34:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:03.615757 | orchestrator | 2025-06-02 13:34:03 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:03.617195 | orchestrator | 2025-06-02 13:34:03 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:34:03.617713 | orchestrator | 2025-06-02 13:34:03 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:34:03.619137 | orchestrator | 2025-06-02 13:34:03 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:34:03.619164 | orchestrator | 2025-06-02 13:34:03 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:06.652629 | orchestrator | 2025-06-02 13:34:06 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:06.655203 | orchestrator | 2025-06-02 13:34:06 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:34:06.659496 | orchestrator | 2025-06-02 13:34:06 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:34:06.660854 | orchestrator | 2025-06-02 13:34:06 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:34:06.661002 | orchestrator | 2025-06-02 13:34:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:09.706782 | orchestrator | 2025-06-02 13:34:09 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:09.707411 | orchestrator | 2025-06-02 13:34:09 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:34:09.712005 | orchestrator | 2025-06-02 13:34:09 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:34:09.713458 | orchestrator | 2025-06-02 13:34:09 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:34:09.713498 | orchestrator | 2025-06-02 13:34:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:12.755820 | orchestrator | 2025-06-02 13:34:12 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:12.757548 | orchestrator | 2025-06-02 13:34:12 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:34:12.758137 | orchestrator | 2025-06-02 13:34:12 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:34:12.759039 | orchestrator | 2025-06-02 13:34:12 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:34:12.759062 | orchestrator | 2025-06-02 13:34:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:34:15.796346 | orchestrator | 2025-06-02 13:34:15 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED
2025-06-02 13:34:15.798876 | orchestrator | 2025-06-02 13:34:15 | INFO  | Task
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:15.799400 | orchestrator | 2025-06-02 13:34:15 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:15.800344 | orchestrator | 2025-06-02 13:34:15 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:15.800385 | orchestrator | 2025-06-02 13:34:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:18.837735 | orchestrator | 2025-06-02 13:34:18 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:18.841592 | orchestrator | 2025-06-02 13:34:18 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:18.841678 | orchestrator | 2025-06-02 13:34:18 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:18.843230 | orchestrator | 2025-06-02 13:34:18 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:18.843559 | orchestrator | 2025-06-02 13:34:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:21.881963 | orchestrator | 2025-06-02 13:34:21 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:21.882193 | orchestrator | 2025-06-02 13:34:21 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:21.889007 | orchestrator | 2025-06-02 13:34:21 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:21.891525 | orchestrator | 2025-06-02 13:34:21 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:21.891572 | orchestrator | 2025-06-02 13:34:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:24.923130 | orchestrator | 2025-06-02 13:34:24 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:24.924669 | orchestrator | 2025-06-02 13:34:24 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:24.925286 | orchestrator | 2025-06-02 13:34:24 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:24.926933 | orchestrator | 2025-06-02 13:34:24 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:24.926979 | orchestrator | 2025-06-02 13:34:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:27.957190 | orchestrator | 2025-06-02 13:34:27 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:27.959890 | orchestrator | 2025-06-02 13:34:27 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:27.960431 | orchestrator | 2025-06-02 13:34:27 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:27.962974 | orchestrator | 2025-06-02 13:34:27 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:27.963022 | orchestrator | 2025-06-02 13:34:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:30.989892 | orchestrator | 2025-06-02 13:34:30 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:30.991604 | orchestrator | 2025-06-02 13:34:30 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:30.991727 | orchestrator | 2025-06-02 13:34:30 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:30.992415 | orchestrator | 2025-06-02 13:34:30 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:30.992885 | orchestrator | 2025-06-02 13:34:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:34.030097 | orchestrator | 2025-06-02 13:34:34 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:34.034823 | orchestrator | 2025-06-02 13:34:34 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:34.035233 | orchestrator | 2025-06-02 13:34:34 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:34.037305 | orchestrator | 2025-06-02 13:34:34 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:34.037491 | orchestrator | 2025-06-02 13:34:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:37.081684 | orchestrator | 2025-06-02 13:34:37 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:37.082560 | orchestrator | 2025-06-02 13:34:37 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:37.084438 | orchestrator | 2025-06-02 13:34:37 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:37.085501 | orchestrator | 2025-06-02 13:34:37 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:37.085535 | orchestrator | 2025-06-02 13:34:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:40.136469 | orchestrator | 2025-06-02 13:34:40 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:40.140001 | orchestrator | 2025-06-02 13:34:40 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:40.141825 | orchestrator | 2025-06-02 13:34:40 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:40.143126 | orchestrator | 2025-06-02 13:34:40 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:40.143160 | orchestrator | 2025-06-02 13:34:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:43.189035 | orchestrator | 2025-06-02 13:34:43 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:43.189578 | orchestrator | 2025-06-02 13:34:43 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:43.190601 | orchestrator | 2025-06-02 13:34:43 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:43.191569 | orchestrator | 2025-06-02 13:34:43 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:43.191605 | orchestrator | 2025-06-02 13:34:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:46.249073 | orchestrator | 2025-06-02 13:34:46 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:46.250517 | orchestrator | 2025-06-02 13:34:46 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:46.258549 | orchestrator | 2025-06-02 13:34:46 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:46.258588 | orchestrator | 2025-06-02 13:34:46 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:46.258653 | orchestrator | 2025-06-02 13:34:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:49.305586 | orchestrator | 2025-06-02 13:34:49 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:49.305687 | orchestrator | 2025-06-02 13:34:49 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:49.307502 | orchestrator | 2025-06-02 13:34:49 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:49.308439 | orchestrator | 2025-06-02 13:34:49 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:49.308980 | orchestrator | 2025-06-02 13:34:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:52.339858 | orchestrator | 2025-06-02 13:34:52 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:52.340165 | orchestrator | 2025-06-02 13:34:52 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:52.341554 | orchestrator | 2025-06-02 13:34:52 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:52.341945 | orchestrator | 2025-06-02 13:34:52 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:52.342137 | orchestrator | 2025-06-02 13:34:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:55.389950 | orchestrator | 2025-06-02 13:34:55 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:55.391241 | orchestrator | 2025-06-02 13:34:55 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:55.392546 | orchestrator | 2025-06-02 13:34:55 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:55.394415 | orchestrator | 2025-06-02 13:34:55 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:55.394482 | orchestrator | 2025-06-02 13:34:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:34:58.439995 | orchestrator | 2025-06-02 13:34:58 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:34:58.441587 | orchestrator | 2025-06-02 13:34:58 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:34:58.444102 | orchestrator | 2025-06-02 13:34:58 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:34:58.446011 | orchestrator | 2025-06-02 13:34:58 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:34:58.446167 | orchestrator | 2025-06-02 13:34:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:01.494708 | orchestrator | 2025-06-02 13:35:01 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:35:01.496832 | orchestrator | 2025-06-02 13:35:01 | INFO  | Task 
cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:01.500473 | orchestrator | 2025-06-02 13:35:01 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:35:01.502093 | orchestrator | 2025-06-02 13:35:01 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:35:01.502725 | orchestrator | 2025-06-02 13:35:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:04.560087 | orchestrator | 2025-06-02 13:35:04 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state STARTED 2025-06-02 13:35:04.561531 | orchestrator | 2025-06-02 13:35:04 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:04.563528 | orchestrator | 2025-06-02 13:35:04 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:35:04.565466 | orchestrator | 2025-06-02 13:35:04 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED 2025-06-02 13:35:04.565512 | orchestrator | 2025-06-02 13:35:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:07.625592 | orchestrator | 2025-06-02 13:35:07.625841 | orchestrator | 2025-06-02 13:35:07.625852 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-02 13:35:07.625859 | orchestrator | 2025-06-02 13:35:07.625865 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-02 13:35:07.625872 | orchestrator | Monday 02 June 2025 13:33:23 +0000 (0:00:00.189) 0:00:00.189 *********** 2025-06-02 13:35:07.625878 | orchestrator | changed: [localhost] 2025-06-02 13:35:07.625885 | orchestrator | 2025-06-02 13:35:07.625891 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-02 13:35:07.625897 | orchestrator | Monday 02 June 2025 13:33:24 +0000 (0:00:00.986) 0:00:01.176 *********** 2025-06-02 13:35:07.625903 | orchestrator | changed: 
[localhost] 2025-06-02 13:35:07.625909 | orchestrator | 2025-06-02 13:35:07.625915 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-02 13:35:07.625921 | orchestrator | Monday 02 June 2025 13:33:54 +0000 (0:00:30.227) 0:00:31.403 *********** 2025-06-02 13:35:07.625946 | orchestrator | changed: [localhost] 2025-06-02 13:35:07.625952 | orchestrator | 2025-06-02 13:35:07.625958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:35:07.625964 | orchestrator | 2025-06-02 13:35:07.626064 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:35:07.626076 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:03.951) 0:00:35.354 *********** 2025-06-02 13:35:07.626082 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:35:07.626088 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:35:07.626094 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:35:07.626099 | orchestrator | 2025-06-02 13:35:07.626105 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:35:07.626111 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:00.260) 0:00:35.615 *********** 2025-06-02 13:35:07.626117 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-02 13:35:07.626124 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-02 13:35:07.626130 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-02 13:35:07.626136 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-02 13:35:07.626142 | orchestrator | 2025-06-02 13:35:07.626148 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-02 13:35:07.626154 | orchestrator | skipping: no hosts matched 2025-06-02 13:35:07.626161 | orchestrator | 
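The repeated "Task … is in state STARTED … Wait 1 second(s) until the next check" entries above come from a client polling the OSISM task queue until every task reaches a terminal state. A minimal sketch of that loop, with a hypothetical `get_state` callable standing in for the real task-state API:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll each task's state, logging like the job output above,
    until every task has left the STARTED state."""
    pending = set(task_ids)
    while pending:
        # sorted() snapshots the set, so discarding while looping is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that each cycle re-checks every still-pending task, which is why the same UUIDs reappear in the log every few seconds until one of them flips to SUCCESS and drops out of the list.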
2025-06-02 13:35:07.626167 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:35:07.626175 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:35:07.626184 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:35:07.626193 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:35:07.626199 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:35:07.626206 | orchestrator |
2025-06-02 13:35:07.626212 | orchestrator |
2025-06-02 13:35:07.626219 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:35:07.626226 | orchestrator | Monday 02 June 2025 13:33:59 +0000 (0:00:00.572) 0:00:36.187 ***********
2025-06-02 13:35:07.626232 | orchestrator | ===============================================================================
2025-06-02 13:35:07.626239 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.23s
2025-06-02 13:35:07.626246 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.95s
2025-06-02 13:35:07.626256 | orchestrator | Ensure the destination directory exists --------------------------------- 0.99s
2025-06-02 13:35:07.626263 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2025-06-02 13:35:07.626270 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2025-06-02 13:35:07.626276 | orchestrator |
2025-06-02 13:35:07.626283 | orchestrator |
2025-06-02 13:35:07.626290 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:35:07.626306 | orchestrator |
2025-06-02 13:35:07.626313 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:35:07.626320 | orchestrator | Monday 02 June 2025 13:32:03 +0000 (0:00:00.477) 0:00:00.477 ***********
2025-06-02 13:35:07.626327 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:35:07.626334 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:35:07.626340 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:35:07.626347 | orchestrator |
2025-06-02 13:35:07.626353 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:35:07.626384 | orchestrator | Monday 02 June 2025 13:32:04 +0000 (0:00:00.397) 0:00:00.875 ***********
2025-06-02 13:35:07.626417 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-02 13:35:07.626428 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-02 13:35:07.626438 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-02 13:35:07.626446 | orchestrator |
2025-06-02 13:35:07.626456 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-02 13:35:07.626466 | orchestrator |
2025-06-02 13:35:07.626476 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-02 13:35:07.626486 | orchestrator | Monday 02 June 2025 13:32:04 +0000 (0:00:00.769) 0:00:01.644 ***********
2025-06-02 13:35:07.626496 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:35:07.626507 | orchestrator |
2025-06-02 13:35:07.626516 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-02 13:35:07.626525 | orchestrator | Monday 02 June 2025 13:32:06 +0000 (0:00:01.096) 0:00:02.740 ***********
2025-06-02 13:35:07.626544 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-02 13:35:07.626550 | orchestrator |
2025-06-02 13:35:07.626556 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-02 13:35:07.626562 | orchestrator | Monday 02 June 2025 13:32:09 +0000 (0:00:03.316) 0:00:06.056 ***********
2025-06-02 13:35:07.626568 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-02 13:35:07.626574 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-02 13:35:07.626579 | orchestrator |
2025-06-02 13:35:07.626586 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-02 13:35:07.626591 | orchestrator | Monday 02 June 2025 13:32:15 +0000 (0:00:06.257) 0:00:12.314 ***********
2025-06-02 13:35:07.626597 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 13:35:07.626603 | orchestrator |
2025-06-02 13:35:07.626609 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-06-02 13:35:07.626614 | orchestrator | Monday 02 June 2025 13:32:18 +0000 (0:00:03.128) 0:00:15.443 ***********
2025-06-02 13:35:07.626620 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 13:35:07.626626 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-02 13:35:07.626631 | orchestrator |
2025-06-02 13:35:07.626637 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-02 13:35:07.626643 | orchestrator | Monday 02 June 2025 13:32:22 +0000 (0:00:03.840) 0:00:19.283 ***********
2025-06-02 13:35:07.626648 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 13:35:07.626654 | orchestrator |
2025-06-02 13:35:07.626660 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-06-02 13:35:07.626666 | orchestrator | Monday 02 June 2025 13:32:26 +0000 (0:00:03.562) 0:00:22.846 ***********
2025-06-02 13:35:07.626671 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-06-02 13:35:07.626677 | orchestrator |
2025-06-02 13:35:07.626683 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-06-02 13:35:07.626688 | orchestrator | Monday 02 June 2025 13:32:30 +0000 (0:00:03.957) 0:00:26.803 ***********
2025-06-02 13:35:07.626696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 13:35:07.626713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 13:35:07.626731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 13:35:07.626739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 13:35:07.626748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 13:35:07.626754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 13:35:07.626765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:35:07.626863 | orchestrator |
2025-06-02 13:35:07.626873 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-06-02 13:35:07.626883 | orchestrator | Monday 02 June 2025 13:32:34 +0000 (0:00:04.115) 0:00:30.919 ***********
2025-06-02 13:35:07.626892 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:07.626901 | orchestrator |
2025-06-02 13:35:07.626910 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-06-02 13:35:07.626927 | orchestrator | Monday 02 June 2025 13:32:34 +0000 (0:00:00.132) 0:00:31.051 ***********
2025-06-02 13:35:07.626937 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:07.626947 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:07.626954 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:07.626959 | orchestrator |
2025-06-02 13:35:07.626965 |
orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 13:35:07.626971 | orchestrator | Monday 02 June 2025 13:32:34 +0000 (0:00:00.384) 0:00:31.436 *********** 2025-06-02 13:35:07.626976 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:35:07.626982 | orchestrator | 2025-06-02 13:35:07.626988 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 13:35:07.626994 | orchestrator | Monday 02 June 2025 13:32:36 +0000 (0:00:01.422) 0:00:32.858 *********** 2025-06-02 13:35:07.627000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}}) 2025-06-02 13:35:07.627114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627136 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627152 | orchestrator | 2025-06-02 13:35:07.627158 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 13:35:07.627164 | orchestrator | Monday 02 June 2025 13:32:42 +0000 (0:00:05.995) 0:00:38.854 *********** 2025-06-02 13:35:07.627170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627218 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:07.627224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 
13:35:07.627272 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:07.627278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627326 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:07.627332 | orchestrator | 2025-06-02 13:35:07.627338 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-02 13:35:07.627344 | orchestrator | Monday 02 June 2025 13:32:43 +0000 (0:00:01.447) 0:00:40.302 *********** 2025-06-02 13:35:07.627350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627435 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:07.627445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627505 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:07.627515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.627525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.627539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.627591 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:07.627600 | orchestrator | 2025-06-02 13:35:07.627610 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-02 13:35:07.627621 | orchestrator | Monday 02 June 2025 13:32:45 +0000 (0:00:01.526) 0:00:41.828 *********** 2025-06-02 13:35:07.627630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.627682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 
13:35:07.627703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 
13:35:07.627737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.627781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/2025-06-02 13:35:07 | INFO  | Task e33e47bf-4ffe-476b-b4c1-9c5ac68c904f is in state SUCCESS 2025-06-02 13:35:07.628300 | orchestrator | log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628468 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628479 | orchestrator | 2025-06-02 13:35:07.628492 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-02 13:35:07.628504 | orchestrator | Monday 02 June 2025 13:32:51 +0000 (0:00:06.521) 0:00:48.350 *********** 2025-06-02 13:35:07.628516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.628592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.628626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.628640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.628865 | orchestrator | 2025-06-02 13:35:07.628880 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-02 13:35:07.628892 | orchestrator | Monday 02 June 2025 13:33:13 +0000 (0:00:22.077) 0:01:10.427 *********** 2025-06-02 13:35:07.628905 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 13:35:07.628919 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 13:35:07.628932 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 13:35:07.628945 | orchestrator | 2025-06-02 13:35:07.628964 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-02 13:35:07.628977 | orchestrator | Monday 02 June 2025 13:33:19 +0000 (0:00:05.885) 0:01:16.312 *********** 2025-06-02 13:35:07.628990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 13:35:07.629002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 13:35:07.629015 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 13:35:07.629027 | orchestrator | 2025-06-02 13:35:07.629040 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-02 13:35:07.629053 | orchestrator | Monday 02 June 2025 13:33:23 +0000 (0:00:04.015) 0:01:20.328 *********** 2025-06-02 13:35:07.629066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629122 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629221 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629352 | orchestrator | 2025-06-02 13:35:07.629390 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-02 13:35:07.629411 | orchestrator | Monday 02 June 2025 
13:33:26 +0000 (0:00:02.821) 0:01:23.149 *********** 2025-06-02 13:35:07.629431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-06-02 13:35:07.629714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.629726 | orchestrator | 2025-06-02 13:35:07.629737 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 13:35:07.629748 | orchestrator | Monday 02 June 2025 13:33:28 +0000 (0:00:02.222) 0:01:25.371 *********** 2025-06-02 13:35:07.629766 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:07.629786 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:07.629805 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:07.629823 | orchestrator | 2025-06-02 13:35:07.629842 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-02 13:35:07.629861 | orchestrator | Monday 02 June 2025 13:33:29 +0000 (0:00:00.416) 0:01:25.788 *********** 2025-06-02 13:35:07.629878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.629906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.629919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.629973 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:07.629985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.630008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.630080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 
13:35:07.630099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630134 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
13:35:07.630145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 13:35:07.630170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 13:35:07.630183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:35:07.630237 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:07.630248 | orchestrator | 2025-06-02 13:35:07.630259 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-02 13:35:07.630270 | orchestrator | Monday 02 June 2025 13:33:30 +0000 (0:00:01.000) 0:01:26.788 *********** 2025-06-02 13:35:07.630281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.630304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.630323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 13:35:07.630335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-02 13:35:07.630347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-06-02 13:35:07.630476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630524 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:35:07.630588 | orchestrator | 2025-06-02 13:35:07.630599 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 13:35:07.630611 | orchestrator | Monday 02 June 2025 13:33:34 +0000 (0:00:04.224) 0:01:31.013 *********** 2025-06-02 13:35:07.630622 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:07.630633 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:07.630644 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:07.630654 | orchestrator | 2025-06-02 13:35:07.630665 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-02 13:35:07.630676 | orchestrator | Monday 02 June 2025 13:33:34 +0000 (0:00:00.264) 0:01:31.278 *********** 2025-06-02 13:35:07.630687 | orchestrator | changed: [testbed-node-0] 
=> (item=designate) 2025-06-02 13:35:07.630697 | orchestrator | 2025-06-02 13:35:07.630708 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-02 13:35:07.630719 | orchestrator | Monday 02 June 2025 13:33:36 +0000 (0:00:02.367) 0:01:33.645 *********** 2025-06-02 13:35:07.630730 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:35:07.630747 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-02 13:35:07.630758 | orchestrator | 2025-06-02 13:35:07.630769 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-02 13:35:07.630780 | orchestrator | Monday 02 June 2025 13:33:39 +0000 (0:00:02.125) 0:01:35.771 *********** 2025-06-02 13:35:07.630790 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.630801 | orchestrator | 2025-06-02 13:35:07.630811 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 13:35:07.630822 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:18.967) 0:01:54.738 *********** 2025-06-02 13:35:07.630833 | orchestrator | 2025-06-02 13:35:07.630843 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 13:35:07.630859 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:00.136) 0:01:54.875 *********** 2025-06-02 13:35:07.630879 | orchestrator | 2025-06-02 13:35:07.630906 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-02 13:35:07.630928 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:00.146) 0:01:55.021 *********** 2025-06-02 13:35:07.630945 | orchestrator | 2025-06-02 13:35:07.630956 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-02 13:35:07.630974 | orchestrator | Monday 02 June 2025 13:33:58 +0000 (0:00:00.121) 0:01:55.142 
*********** 2025-06-02 13:35:07.630986 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.630996 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631007 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631017 | orchestrator | 2025-06-02 13:35:07.631029 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-02 13:35:07.631040 | orchestrator | Monday 02 June 2025 13:34:11 +0000 (0:00:13.015) 0:02:08.158 *********** 2025-06-02 13:35:07.631050 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631061 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631072 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631083 | orchestrator | 2025-06-02 13:35:07.631094 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-02 13:35:07.631104 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:07.171) 0:02:15.329 *********** 2025-06-02 13:35:07.631115 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631126 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631136 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631147 | orchestrator | 2025-06-02 13:35:07.631158 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-02 13:35:07.631168 | orchestrator | Monday 02 June 2025 13:34:26 +0000 (0:00:07.519) 0:02:22.849 *********** 2025-06-02 13:35:07.631179 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631190 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631201 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631211 | orchestrator | 2025-06-02 13:35:07.631222 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-02 13:35:07.631233 | orchestrator | Monday 02 June 2025 13:34:35 +0000 (0:00:09.417) 0:02:32.267 *********** 
2025-06-02 13:35:07.631244 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631254 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631265 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631275 | orchestrator | 2025-06-02 13:35:07.631286 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-02 13:35:07.631297 | orchestrator | Monday 02 June 2025 13:34:44 +0000 (0:00:09.227) 0:02:41.495 *********** 2025-06-02 13:35:07.631308 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631318 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:07.631328 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:07.631339 | orchestrator | 2025-06-02 13:35:07.631350 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-02 13:35:07.631454 | orchestrator | Monday 02 June 2025 13:34:58 +0000 (0:00:13.243) 0:02:54.738 *********** 2025-06-02 13:35:07.631492 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:07.631505 | orchestrator | 2025-06-02 13:35:07.631516 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:35:07.631527 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 13:35:07.631540 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:35:07.631551 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:35:07.631563 | orchestrator | 2025-06-02 13:35:07.631573 | orchestrator | 2025-06-02 13:35:07.631584 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:35:07.631595 | orchestrator | Monday 02 June 2025 13:35:05 +0000 (0:00:06.945) 0:03:01.684 *********** 2025-06-02 13:35:07.631606 | 
orchestrator | =============================================================================== 2025-06-02 13:35:07.631616 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.08s 2025-06-02 13:35:07.631627 | orchestrator | designate : Running Designate bootstrap container ---------------------- 18.97s 2025-06-02 13:35:07.631638 | orchestrator | designate : Restart designate-worker container ------------------------- 13.24s 2025-06-02 13:35:07.631648 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.02s 2025-06-02 13:35:07.631659 | orchestrator | designate : Restart designate-producer container ------------------------ 9.42s 2025-06-02 13:35:07.631669 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.23s 2025-06-02 13:35:07.631680 | orchestrator | designate : Restart designate-central container ------------------------- 7.52s 2025-06-02 13:35:07.631691 | orchestrator | designate : Restart designate-api container ----------------------------- 7.17s 2025-06-02 13:35:07.631701 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.95s 2025-06-02 13:35:07.631710 | orchestrator | designate : Copying over config.json files for services ----------------- 6.52s 2025-06-02 13:35:07.631720 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.26s 2025-06-02 13:35:07.631729 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.00s 2025-06-02 13:35:07.631738 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.89s 2025-06-02 13:35:07.631748 | orchestrator | designate : Check designate containers ---------------------------------- 4.23s 2025-06-02 13:35:07.631757 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.12s 2025-06-02 13:35:07.631769 | orchestrator | 
designate : Copying over named.conf ------------------------------------- 4.02s
2025-06-02 13:35:07.631793 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.96s
2025-06-02 13:35:07.631811 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.84s
2025-06-02 13:35:07.631828 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.56s
2025-06-02 13:35:07.631838 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.32s
2025-06-02 13:35:07.631855 | orchestrator | 2025-06-02 13:35:07 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:35:07.632292 | orchestrator | 2025-06-02 13:35:07 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:35:07.633938 | orchestrator | 2025-06-02 13:35:07 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:35:07.637040 | orchestrator | 2025-06-02 13:35:07 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED
2025-06-02 13:35:07.637748 | orchestrator | 2025-06-02 13:35:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:35:10.683534 | orchestrator | 2025-06-02 13:35:10 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:35:10.686155 | orchestrator | 2025-06-02 13:35:10 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:35:10.687079 | orchestrator | 2025-06-02 13:35:10 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state STARTED
2025-06-02 13:35:10.688005 | orchestrator | 2025-06-02 13:35:10 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED
2025-06-02 13:35:10.688028 | orchestrator | 2025-06-02 13:35:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:35:13.738677 | orchestrator | 2025-06-02 13:35:13 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED
2025-06-02 13:35:13.739884 | orchestrator | 2025-06-02 13:35:13 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED
2025-06-02 13:35:13.742395 | orchestrator | 2025-06-02 13:35:13 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED
2025-06-02 13:35:13.745815 | orchestrator |
2025-06-02 13:35:13.745851 | orchestrator |
2025-06-02 13:35:13.745862 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:35:13.745872 | orchestrator |
2025-06-02 13:35:13.745881 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:35:13.745890 | orchestrator | Monday 02 June 2025 13:34:04 +0000 (0:00:00.220) 0:00:00.220 ***********
2025-06-02 13:35:13.745900 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:35:13.745910 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:35:13.745919 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:35:13.745929 | orchestrator |
2025-06-02 13:35:13.745938 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:35:13.745947 | orchestrator | Monday 02 June 2025 13:34:04 +0000 (0:00:00.254) 0:00:00.475 ***********
2025-06-02 13:35:13.745957 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-06-02 13:35:13.745966 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-06-02 13:35:13.745975 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-06-02 13:35:13.745984 | orchestrator |
2025-06-02 13:35:13.745992 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-06-02 13:35:13.746001 | orchestrator |
2025-06-02 13:35:13.746010 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 13:35:13.746065 | orchestrator | Monday 02 June 2025 13:34:05 +0000 (0:00:00.348) 0:00:00.823 ***********
2025-06-02 13:35:13.746074 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:35:13.746084 | orchestrator |
2025-06-02 13:35:13.746093 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-02 13:35:13.746102 | orchestrator | Monday 02 June 2025 13:34:05 +0000 (0:00:00.511) 0:00:01.335 ***********
2025-06-02 13:35:13.746110 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-06-02 13:35:13.746119 | orchestrator |
2025-06-02 13:35:13.746128 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-06-02 13:35:13.746136 | orchestrator | Monday 02 June 2025 13:34:09 +0000 (0:00:03.373) 0:00:04.708 ***********
2025-06-02 13:35:13.746145 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-06-02 13:35:13.746155 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-06-02 13:35:13.746198 | orchestrator |
2025-06-02 13:35:13.746208 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-06-02 13:35:13.746216 | orchestrator | Monday 02 June 2025 13:34:15 +0000 (0:00:06.444) 0:00:11.153 ***********
2025-06-02 13:35:13.746226 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 13:35:13.746234 | orchestrator |
2025-06-02 13:35:13.746271 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-06-02 13:35:13.746280 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:03.229) 0:00:14.382 ***********
2025-06-02 13:35:13.746288 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 13:35:13.746297 | orchestrator | changed: [testbed-node-0] =>
(item=placement -> service) 2025-06-02 13:35:13.746305 | orchestrator | 2025-06-02 13:35:13.746314 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-02 13:35:13.746339 | orchestrator | Monday 02 June 2025 13:34:22 +0000 (0:00:03.790) 0:00:18.173 *********** 2025-06-02 13:35:13.746348 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:35:13.746357 | orchestrator | 2025-06-02 13:35:13.746384 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-02 13:35:13.746393 | orchestrator | Monday 02 June 2025 13:34:25 +0000 (0:00:03.349) 0:00:21.522 *********** 2025-06-02 13:35:13.746401 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-02 13:35:13.746410 | orchestrator | 2025-06-02 13:35:13.746420 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 13:35:13.746430 | orchestrator | Monday 02 June 2025 13:34:29 +0000 (0:00:04.156) 0:00:25.679 *********** 2025-06-02 13:35:13.746440 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.746450 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:13.746459 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:13.746469 | orchestrator | 2025-06-02 13:35:13.746479 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-02 13:35:13.746489 | orchestrator | Monday 02 June 2025 13:34:30 +0000 (0:00:00.314) 0:00:25.994 *********** 2025-06-02 13:35:13.746503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746564 | orchestrator | 2025-06-02 13:35:13.746574 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-02 13:35:13.746584 | orchestrator | Monday 02 June 2025 13:34:31 +0000 (0:00:01.094) 0:00:27.088 *********** 2025-06-02 13:35:13.746594 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.746604 | orchestrator | 2025-06-02 13:35:13.746614 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-02 13:35:13.746624 | orchestrator | Monday 02 June 2025 13:34:31 +0000 (0:00:00.094) 0:00:27.183 *********** 2025-06-02 13:35:13.746634 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.746644 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:13.746654 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:13.746663 | orchestrator | 2025-06-02 13:35:13.746673 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-02 13:35:13.746688 | orchestrator | Monday 02 June 2025 13:34:31 +0000 (0:00:00.344) 0:00:27.527 *********** 2025-06-02 13:35:13.746698 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:35:13.746709 | orchestrator | 2025-06-02 13:35:13.746718 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-02 13:35:13.746729 | orchestrator | Monday 02 June 2025 13:34:32 +0000 (0:00:00.397) 
0:00:27.924 *********** 2025-06-02 13:35:13.746739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746771 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.746787 | orchestrator | 2025-06-02 13:35:13.746802 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-02 13:35:13.746817 | orchestrator | Monday 02 June 2025 13:34:33 +0000 (0:00:01.327) 0:00:29.252 *********** 2025-06-02 13:35:13.746838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.746855 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.746870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.746885 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:13.746911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.746924 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:13.746933 | orchestrator | 2025-06-02 13:35:13.746941 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-02 13:35:13.746958 | orchestrator | Monday 02 June 2025 13:34:34 +0000 (0:00:00.553) 0:00:29.805 *********** 2025-06-02 13:35:13.746967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.746975 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.746984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.746998 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:13.747007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.747016 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:13.747024 | orchestrator | 2025-06-02 13:35:13.747033 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-02 13:35:13.747041 | orchestrator | Monday 02 June 2025 13:34:34 +0000 (0:00:00.603) 0:00:30.408 *********** 2025-06-02 13:35:13.747059 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747093 | orchestrator | 2025-06-02 13:35:13.747101 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-02 13:35:13.747110 | orchestrator | Monday 02 June 2025 13:34:35 +0000 (0:00:01.246) 0:00:31.655 *********** 2025-06-02 13:35:13.747129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-06-02 13:35:13.747139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747171 | orchestrator | 2025-06-02 13:35:13.747179 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-06-02 13:35:13.747188 | orchestrator | Monday 02 June 2025 13:34:39 +0000 (0:00:03.471) 0:00:35.126 ***********
2025-06-02 13:35:13.747197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 13:35:13.747205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 13:35:13.747214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 13:35:13.747223 | orchestrator |
2025-06-02 13:35:13.747231 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-06-02 13:35:13.747240 | orchestrator | Monday 02 June 2025 13:34:40 +0000 (0:00:01.393) 0:00:36.520 ***********
2025-06-02 13:35:13.747248 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:35:13.747257 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:35:13.747265 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:35:13.747274 | orchestrator |
2025-06-02 13:35:13.747282 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-06-02 13:35:13.747291 | orchestrator | Monday 02 June 2025 13:34:42 +0000 (0:00:01.247) 0:00:37.767 ***********
2025-06-02 13:35:13.747304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.747314 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:13.747323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.747338 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:13.747354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 13:35:13.747386 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:13.747395 | orchestrator | 2025-06-02 13:35:13.747403 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-02 13:35:13.747412 | orchestrator | Monday 02 June 2025 13:34:42 +0000 (0:00:00.484) 0:00:38.252 *********** 2025-06-02 13:35:13.747421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 13:35:13.747461 | orchestrator | 2025-06-02 13:35:13.747470 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-02 13:35:13.747478 | orchestrator | Monday 02 June 2025 13:34:43 +0000 (0:00:01.237) 0:00:39.489 *********** 2025-06-02 13:35:13.747487 | orchestrator | changed: [testbed-node-0] 2025-06-02 
13:35:13.747495 | orchestrator | 2025-06-02 13:35:13.747504 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-02 13:35:13.747512 | orchestrator | Monday 02 June 2025 13:34:45 +0000 (0:00:01.784) 0:00:41.274 *********** 2025-06-02 13:35:13.747521 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:13.747529 | orchestrator | 2025-06-02 13:35:13.747538 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-02 13:35:13.747546 | orchestrator | Monday 02 June 2025 13:34:48 +0000 (0:00:02.423) 0:00:43.698 *********** 2025-06-02 13:35:13.747560 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:13.747577 | orchestrator | 2025-06-02 13:35:13.747593 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 13:35:13.747608 | orchestrator | Monday 02 June 2025 13:35:01 +0000 (0:00:13.621) 0:00:57.319 *********** 2025-06-02 13:35:13.747622 | orchestrator | 2025-06-02 13:35:13.747637 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 13:35:13.747653 | orchestrator | Monday 02 June 2025 13:35:01 +0000 (0:00:00.062) 0:00:57.381 *********** 2025-06-02 13:35:13.747669 | orchestrator | 2025-06-02 13:35:13.747684 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 13:35:13.747693 | orchestrator | Monday 02 June 2025 13:35:01 +0000 (0:00:00.058) 0:00:57.439 *********** 2025-06-02 13:35:13.747701 | orchestrator | 2025-06-02 13:35:13.747715 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-02 13:35:13.747730 | orchestrator | Monday 02 June 2025 13:35:01 +0000 (0:00:00.064) 0:00:57.504 *********** 2025-06-02 13:35:13.747744 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:13.747759 | orchestrator | changed: [testbed-node-0] 2025-06-02 
13:35:13.747773 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:35:13.747789 | orchestrator |
2025-06-02 13:35:13.747805 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:35:13.747821 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 13:35:13.747837 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 13:35:13.747852 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 13:35:13.747867 | orchestrator |
2025-06-02 13:35:13.747880 | orchestrator |
2025-06-02 13:35:13.747893 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:35:13.747907 | orchestrator | Monday 02 June 2025 13:35:12 +0000 (0:00:10.398) 0:01:07.902 ***********
2025-06-02 13:35:13.747920 | orchestrator | ===============================================================================
2025-06-02 13:35:13.747933 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.62s
2025-06-02 13:35:13.747946 | orchestrator | placement : Restart placement-api container ---------------------------- 10.40s
2025-06-02 13:35:13.747959 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s
2025-06-02 13:35:13.747972 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.16s
2025-06-02 13:35:13.747996 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.79s
2025-06-02 13:35:13.748010 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.47s
2025-06-02 13:35:13.748023 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.37s
2025-06-02 13:35:13.748037 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.35s
2025-06-02 13:35:13.748058 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.23s
2025-06-02 13:35:13.748072 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.42s
2025-06-02 13:35:13.748086 | orchestrator | placement : Creating placement databases -------------------------------- 1.78s
2025-06-02 13:35:13.748100 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.39s
2025-06-02 13:35:13.748112 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.33s
2025-06-02 13:35:13.748126 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.25s
2025-06-02 13:35:13.748140 | orchestrator | placement : Copying over config.json files for services ----------------- 1.25s
2025-06-02 13:35:13.748154 | orchestrator | placement : Check placement containers ---------------------------------- 1.24s
2025-06-02 13:35:13.748168 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.09s
2025-06-02 13:35:13.748183 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.60s
2025-06-02 13:35:13.748198 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.55s
2025-06-02 13:35:13.748213 | orchestrator | placement : include_tasks ----------------------------------------------- 0.51s
2025-06-02 13:35:13.748229 | orchestrator | 2025-06-02 13:35:13 | INFO  | Task 353121b9-1fec-4357-a009-3cb5c24e5411 is in state SUCCESS
2025-06-02 13:35:13.748400 | orchestrator | 2025-06-02 13:35:13 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED
2025-06-02 13:35:13.748426 | orchestrator | 2025-06-02 13:35:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:35:16.809913 |
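The "Task … is in state STARTED … Wait 1 second(s) until the next check" lines above come from the OSISM client polling its task queue until each deployment task reports SUCCESS. A minimal sketch of such a polling loop; `get_state`, the state names, and the timeout handling here are illustrative assumptions, not the actual OSISM implementation:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0):
    """Poll each task's state until all reach SUCCESS.

    get_state(task_id) -> str is an assumed callback (e.g. a Celery
    AsyncResult lookup); 'STARTED'/'SUCCESS' mirror the states in the log.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return True
```

The loop re-checks every still-pending task each round, which matches the log: finished tasks drop out of the status lines while STARTED ones keep reappearing until the play behind them completes.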
orchestrator | 2025-06-02 13:35:16 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:16.813868 | orchestrator | 2025-06-02 13:35:16 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:16.824105 | orchestrator | 2025-06-02 13:35:16 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state STARTED 2025-06-02 13:35:16.826710 | orchestrator | 2025-06-02 13:35:16 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:16.828501 | orchestrator | 2025-06-02 13:35:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:19.882519 | orchestrator | 2025-06-02 13:35:19 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:19.883787 | orchestrator | 2025-06-02 13:35:19 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:19.887783 | orchestrator | 2025-06-02 13:35:19 | INFO  | Task 44c24d1c-a336-47f6-ab6b-b48720d2962a is in state SUCCESS 2025-06-02 13:35:19.888100 | orchestrator | 2025-06-02 13:35:19.890597 | orchestrator | 2025-06-02 13:35:19.890708 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:35:19.890722 | orchestrator | 2025-06-02 13:35:19.890733 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:35:19.890746 | orchestrator | Monday 02 June 2025 13:30:55 +0000 (0:00:00.256) 0:00:00.256 *********** 2025-06-02 13:35:19.890757 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:35:19.890770 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:35:19.890780 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:35:19.890791 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:35:19.890802 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:35:19.890873 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:35:19.890887 | orchestrator | 2025-06-02 13:35:19.890899 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:35:19.890914 | orchestrator | Monday 02 June 2025 13:30:55 +0000 (0:00:00.659) 0:00:00.916 *********** 2025-06-02 13:35:19.890932 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-02 13:35:19.890946 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-02 13:35:19.890972 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-02 13:35:19.890983 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-02 13:35:19.890994 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-02 13:35:19.891004 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-02 13:35:19.891015 | orchestrator | 2025-06-02 13:35:19.891042 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-02 13:35:19.891053 | orchestrator | 2025-06-02 13:35:19.891064 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 13:35:19.891075 | orchestrator | Monday 02 June 2025 13:30:56 +0000 (0:00:00.555) 0:00:01.471 *********** 2025-06-02 13:35:19.891087 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:35:19.891099 | orchestrator | 2025-06-02 13:35:19.891110 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-02 13:35:19.891121 | orchestrator | Monday 02 June 2025 13:30:57 +0000 (0:00:01.157) 0:00:02.629 *********** 2025-06-02 13:35:19.891131 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:35:19.891142 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:35:19.891153 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:35:19.891166 | orchestrator | ok: [testbed-node-4] 2025-06-02 
13:35:19.891178 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:35:19.891190 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:35:19.891202 | orchestrator | 2025-06-02 13:35:19.891215 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-02 13:35:19.891229 | orchestrator | Monday 02 June 2025 13:30:59 +0000 (0:00:01.868) 0:00:04.498 *********** 2025-06-02 13:35:19.891256 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:35:19.891268 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:35:19.891280 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:35:19.891293 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:35:19.891305 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:35:19.891317 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:35:19.891329 | orchestrator | 2025-06-02 13:35:19.891342 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-02 13:35:19.891354 | orchestrator | Monday 02 June 2025 13:31:00 +0000 (0:00:01.180) 0:00:05.678 *********** 2025-06-02 13:35:19.891395 | orchestrator | ok: [testbed-node-0] => { 2025-06-02 13:35:19.891408 | orchestrator |  "changed": false, 2025-06-02 13:35:19.891421 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891434 | orchestrator | } 2025-06-02 13:35:19.891447 | orchestrator | ok: [testbed-node-1] => { 2025-06-02 13:35:19.891460 | orchestrator |  "changed": false, 2025-06-02 13:35:19.891472 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891484 | orchestrator | } 2025-06-02 13:35:19.891496 | orchestrator | ok: [testbed-node-2] => { 2025-06-02 13:35:19.891510 | orchestrator |  "changed": false, 2025-06-02 13:35:19.891521 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891532 | orchestrator | } 2025-06-02 13:35:19.891543 | orchestrator | ok: [testbed-node-3] => { 2025-06-02 13:35:19.891553 | orchestrator |  "changed": false, 2025-06-02 
13:35:19.891564 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891575 | orchestrator | } 2025-06-02 13:35:19.891585 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 13:35:19.891596 | orchestrator |  "changed": false, 2025-06-02 13:35:19.891607 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891627 | orchestrator | } 2025-06-02 13:35:19.891638 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 13:35:19.891649 | orchestrator |  "changed": false, 2025-06-02 13:35:19.891660 | orchestrator |  "msg": "All assertions passed" 2025-06-02 13:35:19.891670 | orchestrator | } 2025-06-02 13:35:19.891681 | orchestrator | 2025-06-02 13:35:19.891692 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-02 13:35:19.891703 | orchestrator | Monday 02 June 2025 13:31:01 +0000 (0:00:00.853) 0:00:06.532 *********** 2025-06-02 13:35:19.891713 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.891724 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.891735 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.891745 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.891756 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.891766 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.891777 | orchestrator | 2025-06-02 13:35:19.891788 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-02 13:35:19.891798 | orchestrator | Monday 02 June 2025 13:31:02 +0000 (0:00:00.568) 0:00:07.100 *********** 2025-06-02 13:35:19.891809 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-02 13:35:19.891820 | orchestrator | 2025-06-02 13:35:19.891830 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-02 13:35:19.891841 | orchestrator | Monday 02 June 2025 13:31:05 +0000 (0:00:03.240) 0:00:10.340 
*********** 2025-06-02 13:35:19.891852 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-02 13:35:19.891864 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-02 13:35:19.891874 | orchestrator | 2025-06-02 13:35:19.891899 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-02 13:35:19.891910 | orchestrator | Monday 02 June 2025 13:31:11 +0000 (0:00:06.276) 0:00:16.617 *********** 2025-06-02 13:35:19.891921 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:35:19.891932 | orchestrator | 2025-06-02 13:35:19.891942 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-02 13:35:19.891953 | orchestrator | Monday 02 June 2025 13:31:14 +0000 (0:00:02.972) 0:00:19.590 *********** 2025-06-02 13:35:19.891964 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:35:19.891975 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-02 13:35:19.891985 | orchestrator | 2025-06-02 13:35:19.891996 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-02 13:35:19.892007 | orchestrator | Monday 02 June 2025 13:31:18 +0000 (0:00:03.629) 0:00:23.219 *********** 2025-06-02 13:35:19.892017 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:35:19.892033 | orchestrator | 2025-06-02 13:35:19.892049 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-02 13:35:19.892061 | orchestrator | Monday 02 June 2025 13:31:21 +0000 (0:00:03.203) 0:00:26.423 *********** 2025-06-02 13:35:19.892071 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-02 13:35:19.892082 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service -> service) 2025-06-02 13:35:19.892093 | orchestrator | 2025-06-02 13:35:19.892103 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 13:35:19.892114 | orchestrator | Monday 02 June 2025 13:31:28 +0000 (0:00:07.228) 0:00:33.651 *********** 2025-06-02 13:35:19.892125 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.892135 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.892146 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.892157 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.892167 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.892178 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.892189 | orchestrator | 2025-06-02 13:35:19.892199 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-02 13:35:19.892216 | orchestrator | Monday 02 June 2025 13:31:29 +0000 (0:00:00.762) 0:00:34.414 *********** 2025-06-02 13:35:19.892227 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.892238 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.892249 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.892259 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.892270 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.892280 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.892291 | orchestrator | 2025-06-02 13:35:19.892302 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-02 13:35:19.892313 | orchestrator | Monday 02 June 2025 13:31:31 +0000 (0:00:01.952) 0:00:36.366 *********** 2025-06-02 13:35:19.892329 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:35:19.892340 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:35:19.892350 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:35:19.892380 | orchestrator | ok: [testbed-node-3] 
2025-06-02 13:35:19.892391 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:35:19.892402 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:35:19.892413 | orchestrator | 2025-06-02 13:35:19.892423 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-02 13:35:19.892434 | orchestrator | Monday 02 June 2025 13:31:32 +0000 (0:00:01.107) 0:00:37.473 *********** 2025-06-02 13:35:19.892445 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.892455 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.892466 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.892477 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.892487 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.892498 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.892509 | orchestrator | 2025-06-02 13:35:19.892519 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-02 13:35:19.892530 | orchestrator | Monday 02 June 2025 13:31:34 +0000 (0:00:02.004) 0:00:39.478 *********** 2025-06-02 13:35:19.892545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892608 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892644 | orchestrator | 2025-06-02 13:35:19.892656 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-02 13:35:19.892667 | orchestrator | Monday 02 June 2025 13:31:37 +0000 (0:00:02.734) 0:00:42.212 *********** 2025-06-02 13:35:19.892678 | orchestrator | [WARNING]: Skipped 2025-06-02 13:35:19.892689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-02 13:35:19.892700 | orchestrator | due to this access issue: 2025-06-02 13:35:19.892710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-02 13:35:19.892721 | orchestrator | a directory 2025-06-02 13:35:19.892732 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:35:19.892743 | orchestrator | 2025-06-02 13:35:19.892759 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 13:35:19.892770 | orchestrator | Monday 02 June 2025 13:31:38 +0000 (0:00:00.832) 0:00:43.045 *********** 2025-06-02 13:35:19.892788 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:35:19.892800 | orchestrator | 2025-06-02 13:35:19.892811 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-02 13:35:19.892822 | orchestrator | Monday 02 June 2025 13:31:39 +0000 (0:00:01.231) 0:00:44.276 *********** 2025-06-02 13:35:19.892833 | orchestrator | changed: [testbed-node-0] => 
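Each item the tasks above iterate over carries a `healthcheck` dict with string-valued fields (`interval`, `retries`, `start_period`, `timeout`) and a `test` command list. A sketch, under the assumption that these are ultimately handed to a Docker-style healthcheck (docker-py expects durations in nanoseconds), of converting one of these dicts; the function name and conversion are illustrative, not kolla-ansible's actual code:

```python
NS_PER_SECOND = 1_000_000_000  # Docker healthcheck durations are nanoseconds

def build_healthcheck(spec):
    """Convert a kolla-style healthcheck dict (string values, as seen in
    the log) into numeric Docker healthcheck options."""
    return {
        "test": list(spec["test"]),  # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        "interval": int(spec["interval"]) * NS_PER_SECOND,
        "timeout": int(spec["timeout"]) * NS_PER_SECOND,
        "retries": int(spec["retries"]),
        "start_period": int(spec["start_period"]) * NS_PER_SECOND,
    }
```

With the `placement-api` values from the log (`interval: '30'`, `retries: '3'`, `start_period: '5'`, `timeout: '30'`), this yields a 30-second probe interval with three retries before the container is marked unhealthy.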
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.892911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.892923 | orchestrator | 2025-06-02 13:35:19.892934 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-02 13:35:19.892945 | orchestrator | Monday 02 June 2025 13:31:43 +0000 (0:00:03.772) 0:00:48.049 *********** 2025-06-02 13:35:19.892961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.892972 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.892984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.892995 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.893014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893031 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.893042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893053 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.893065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893076 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.893091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893103 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.893114 | orchestrator | 2025-06-02 13:35:19.893124 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-02 13:35:19.893135 | orchestrator | Monday 02 June 2025 13:31:46 +0000 (0:00:03.271) 0:00:51.320 *********** 2025-06-02 13:35:19.893146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893164 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.893181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893193 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.893204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893215 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.893237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893248 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.893260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893277 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.893288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.893300 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.893310 | orchestrator | 2025-06-02 13:35:19.893321 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-02 13:35:19.893338 | orchestrator | Monday 02 June 2025 13:31:50 +0000 (0:00:03.713) 0:00:55.034 *********** 2025-06-02 13:35:19.893349 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.893389 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.893401 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.893411 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.893422 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.893433 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.893443 | orchestrator | 2025-06-02 13:35:19.893454 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-02 13:35:19.893465 | orchestrator | 
Monday 02 June 2025 13:31:52 +0000 (0:00:02.589) 0:00:57.623 *********** 2025-06-02 13:35:19.893475 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.893486 | orchestrator | 2025-06-02 13:35:19.893497 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-02 13:35:19.893508 | orchestrator | Monday 02 June 2025 13:31:52 +0000 (0:00:00.118) 0:00:57.742 *********** 2025-06-02 13:35:19.893518 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.893529 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.893540 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.893550 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.893561 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.893571 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.893582 | orchestrator | 2025-06-02 13:35:19.893593 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-02 13:35:19.893604 | orchestrator | Monday 02 June 2025 13:31:53 +0000 (0:00:00.765) 0:00:58.507 *********** 2025-06-02 13:35:19.893620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893632 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.893643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893661 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.893672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 13:35:19.893683 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.894157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894181 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.894263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894279 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.894298 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894320 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.894331 | orchestrator | 2025-06-02 13:35:19.894342 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-02 13:35:19.894352 | orchestrator | Monday 02 June 2025 13:31:56 +0000 (0:00:03.197) 0:01:01.705 *********** 2025-06-02 13:35:19.894388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894433 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894479 | orchestrator | 2025-06-02 13:35:19.894489 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-02 13:35:19.894500 | orchestrator | Monday 02 June 2025 13:32:00 +0000 (0:00:04.164) 0:01:05.870 *********** 2025-06-02 13:35:19.894517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.894605 | orchestrator | 2025-06-02 13:35:19.894616 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-02 
13:35:19.894627 | orchestrator | Monday 02 June 2025 13:32:07 +0000 (0:00:06.371) 0:01:12.241 *********** 2025-06-02 13:35:19.894638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894649 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.894661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.894690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894701 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.894712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.894802 | orchestrator | 2025-06-02 13:35:19.894813 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-02 13:35:19.894824 | orchestrator | Monday 02 June 2025 13:32:10 +0000 (0:00:03.683) 0:01:15.925 *********** 2025-06-02 13:35:19.894835 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.894845 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 13:35:19.894856 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.894867 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:19.894877 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:19.894888 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:19.894899 | orchestrator | 2025-06-02 13:35:19.894912 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-02 13:35:19.894929 | orchestrator | Monday 02 June 2025 13:32:14 +0000 (0:00:03.394) 0:01:19.320 *********** 2025-06-02 13:35:19.894942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.894954 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.894967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.894979 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.894998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.895011 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external':
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.895054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.895068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696',
'listen_port': '9696'}}}})
2025-06-02 13:35:19.895081 | orchestrator |
2025-06-02 13:35:19.895093 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-06-02 13:35:19.895105 | orchestrator | Monday 02 June 2025 13:32:17 +0000 (0:00:03.633) 0:01:22.954 ***********
2025-06-02 13:35:19.895118 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895130 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895142 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895154 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895166 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895178 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895190 | orchestrator |
2025-06-02 13:35:19.895201 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-06-02 13:35:19.895214 | orchestrator | Monday 02 June 2025 13:32:20 +0000 (0:00:02.423) 0:01:25.377 ***********
2025-06-02 13:35:19.895227 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895239 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895251 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895262 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895273 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895283 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895294 | orchestrator |
2025-06-02 13:35:19.895304 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-06-02 13:35:19.895315 | orchestrator | Monday 02 June 2025 13:32:23 +0000 (0:00:02.845) 0:01:28.222 ***********
2025-06-02 13:35:19.895333 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895344 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895354 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895388 | orchestrator |
skipping: [testbed-node-3]
2025-06-02 13:35:19.895399 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895410 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895420 | orchestrator |
2025-06-02 13:35:19.895431 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-06-02 13:35:19.895442 | orchestrator | Monday 02 June 2025 13:32:25 +0000 (0:00:02.145) 0:01:30.367 ***********
2025-06-02 13:35:19.895453 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895463 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895474 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895484 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895495 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895505 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895516 | orchestrator |
2025-06-02 13:35:19.895526 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-06-02 13:35:19.895537 | orchestrator | Monday 02 June 2025 13:32:27 +0000 (0:00:02.419) 0:01:32.787 ***********
2025-06-02 13:35:19.895548 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895558 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895569 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895579 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895590 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895600 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895611 | orchestrator |
2025-06-02 13:35:19.895621 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-06-02 13:35:19.895632 | orchestrator | Monday 02 June 2025 13:32:30 +0000 (0:00:02.421) 0:01:35.209 ***********
2025-06-02 13:35:19.895643 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895653 | orchestrator |
skipping: [testbed-node-0]
2025-06-02 13:35:19.895664 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895675 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895685 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895696 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895706 | orchestrator |
2025-06-02 13:35:19.895717 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-06-02 13:35:19.895728 | orchestrator | Monday 02 June 2025 13:32:34 +0000 (0:00:04.024) 0:01:39.234 ***********
2025-06-02 13:35:19.895739 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895750 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895760 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895771 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.895781 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895792 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.895808 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895819 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895830 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895841 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895852 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 13:35:19.895863 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.895873 | orchestrator |
2025-06-02 13:35:19.895884 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-02
13:35:19.895894 | orchestrator | Monday 02 June 2025 13:32:36 +0000 (0:00:02.451) 0:01:41.685 ***********
2025-06-02 13:35:19.895913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.895924 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.895941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.895953 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.895964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.895975 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.895986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896002 | orchestrator |
skipping: [testbed-node-3]
2025-06-02 13:35:19.896014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896031 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896053 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896064 | orchestrator |
2025-06-02 13:35:19.896075 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-06-02 13:35:19.896086 | orchestrator | Monday 02 June 2025
13:32:38 +0000 (0:00:02.039) 0:01:43.724 ***********
2025-06-02 13:35:19.896103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.896115 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port':
'9696'}}}})
2025-06-02 13:35:19.896137 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.896171 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896193 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896204 |
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896216 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 13:35:19.896246 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896257 | orchestrator |
2025-06-02 13:35:19.896267 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-06-02 13:35:19.896278 | orchestrator | Monday 02 June 2025 13:32:41 +0000 (0:00:02.749) 0:01:46.474 *********** 2025-06-02
13:35:19.896289 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896300 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896311 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896321 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896332 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896343 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896353 | orchestrator |
2025-06-02 13:35:19.896379 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-06-02 13:35:19.896390 | orchestrator | Monday 02 June 2025 13:32:45 +0000 (0:00:03.598) 0:01:50.073 ***********
2025-06-02 13:35:19.896408 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896418 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896429 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896440 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:35:19.896450 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:35:19.896461 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:35:19.896471 | orchestrator |
2025-06-02 13:35:19.896482 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-06-02 13:35:19.896493 | orchestrator | Monday 02 June 2025 13:32:48 +0000 (0:00:03.783) 0:01:53.857 ***********
2025-06-02 13:35:19.896503 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896514 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896525 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896535 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896546 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896562 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896573 | orchestrator |
2025-06-02 13:35:19.896583 | orchestrator | TASK [neutron : Copying over metering_agent.ini]
*******************************
2025-06-02 13:35:19.896594 | orchestrator | Monday 02 June 2025 13:32:51 +0000 (0:00:02.222) 0:01:56.080 ***********
2025-06-02 13:35:19.896605 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896616 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896626 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896637 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896648 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896659 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896669 | orchestrator |
2025-06-02 13:35:19.896680 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-06-02 13:35:19.896691 | orchestrator | Monday 02 June 2025 13:32:54 +0000 (0:00:03.593) 0:01:59.674 ***********
2025-06-02 13:35:19.896702 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896712 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896723 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896734 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896744 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896755 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896766 | orchestrator |
2025-06-02 13:35:19.896776 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-06-02 13:35:19.896787 | orchestrator | Monday 02 June 2025 13:32:58 +0000 (0:00:03.799) 0:02:03.473 ***********
2025-06-02 13:35:19.896798 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896808 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896819 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896830 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896840 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896851 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896862 | orchestrator |
2025-06-02 13:35:19.896872 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-06-02 13:35:19.896883 | orchestrator | Monday 02 June 2025 13:33:00 +0000 (0:00:02.528) 0:02:06.002 ***********
2025-06-02 13:35:19.896894 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.896904 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.896915 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.896925 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.896936 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.896947 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.896957 | orchestrator |
2025-06-02 13:35:19.896968 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-06-02 13:35:19.896979 | orchestrator | Monday 02 June 2025 13:33:05 +0000 (0:00:04.463) 0:02:10.466 ***********
2025-06-02 13:35:19.896990 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.897000 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.897018 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.897029 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.897040 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.897050 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.897061 | orchestrator |
2025-06-02 13:35:19.897072 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-06-02 13:35:19.897083 | orchestrator | Monday 02 June 2025 13:33:08 +0000 (0:00:03.434) 0:02:13.900 ***********
2025-06-02 13:35:19.897093 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.897111 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.897122 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.897133 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.897144 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.897154 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.897165 | orchestrator |
2025-06-02 13:35:19.897176 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-06-02 13:35:19.897186 | orchestrator | Monday 02 June 2025 13:33:12 +0000 (0:00:03.536) 0:02:17.437 ***********
2025-06-02 13:35:19.897197 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.897208 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.897218 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.897229 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.897240 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.897250 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.897261 | orchestrator |
2025-06-02 13:35:19.897272 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-06-02 13:35:19.897282 | orchestrator | Monday 02 June 2025 13:33:14 +0000 (0:00:02.555) 0:02:19.992 ***********
2025-06-02 13:35:19.897293 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897305 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.897315 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897326 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.897337 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897348 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:35:19.897410 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897423 | orchestrator | skipping: [testbed-node-2] 2025-06-02
13:35:19.897434 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897445 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:35:19.897456 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-06-02 13:35:19.897466 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:35:19.897477 | orchestrator |
2025-06-02 13:35:19.897488 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-06-02 13:35:19.897499 | orchestrator | Monday 02 June 2025 13:33:18 +0000 (0:00:03.928) 0:02:23.921 ***********
2025-06-02 13:35:19.897515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.897538 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:35:19.897550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server',
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.897561 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:35:19.897581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 13:35:19.897592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:35:19.897603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image':
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.897614 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.897630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.897642 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.897653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 13:35:19.897672 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.897683 | orchestrator | 2025-06-02 13:35:19.897694 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-02 13:35:19.897705 | orchestrator | Monday 02 June 2025 13:33:21 +0000 (0:00:02.657) 0:02:26.579 *********** 2025-06-02 13:35:19.897716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.897735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.897747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.897767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 13:35:19.897785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.897797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 13:35:19.897807 | 
orchestrator | 2025-06-02 13:35:19.897817 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 13:35:19.897832 | orchestrator | Monday 02 June 2025 13:33:24 +0000 (0:00:03.329) 0:02:29.909 *********** 2025-06-02 13:35:19.897842 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:35:19.897851 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:35:19.897861 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:35:19.897870 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:35:19.897880 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:35:19.897889 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:35:19.897899 | orchestrator | 2025-06-02 13:35:19.897908 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 13:35:19.897918 | orchestrator | Monday 02 June 2025 13:33:25 +0000 (0:00:00.712) 0:02:30.621 *********** 2025-06-02 13:35:19.897928 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:19.897937 | orchestrator | 2025-06-02 13:35:19.897947 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-02 13:35:19.897957 | orchestrator | Monday 02 June 2025 13:33:27 +0000 (0:00:02.032) 0:02:32.653 *********** 2025-06-02 13:35:19.897966 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:19.897976 | orchestrator | 2025-06-02 13:35:19.897985 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-02 13:35:19.897995 | orchestrator | Monday 02 June 2025 13:33:29 +0000 (0:00:01.924) 0:02:34.578 *********** 2025-06-02 13:35:19.898004 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:19.898014 | orchestrator | 2025-06-02 13:35:19.898070 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898081 | orchestrator | Monday 02 June 2025 13:34:19 +0000 
(0:00:49.876) 0:03:24.454 *********** 2025-06-02 13:35:19.898090 | orchestrator | 2025-06-02 13:35:19.898100 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898110 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.094) 0:03:24.548 *********** 2025-06-02 13:35:19.898125 | orchestrator | 2025-06-02 13:35:19.898135 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898145 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.196) 0:03:24.744 *********** 2025-06-02 13:35:19.898154 | orchestrator | 2025-06-02 13:35:19.898164 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898174 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.060) 0:03:24.805 *********** 2025-06-02 13:35:19.898183 | orchestrator | 2025-06-02 13:35:19.898192 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898202 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.156) 0:03:24.961 *********** 2025-06-02 13:35:19.898211 | orchestrator | 2025-06-02 13:35:19.898221 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 13:35:19.898230 | orchestrator | Monday 02 June 2025 13:34:20 +0000 (0:00:00.123) 0:03:25.085 *********** 2025-06-02 13:35:19.898240 | orchestrator | 2025-06-02 13:35:19.898254 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-02 13:35:19.898264 | orchestrator | Monday 02 June 2025 13:34:20 +0000 (0:00:00.129) 0:03:25.214 *********** 2025-06-02 13:35:19.898274 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:35:19.898284 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:35:19.898293 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:35:19.898303 | 
orchestrator | 2025-06-02 13:35:19.898312 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-02 13:35:19.898322 | orchestrator | Monday 02 June 2025 13:34:46 +0000 (0:00:26.496) 0:03:51.710 *********** 2025-06-02 13:35:19.898331 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:35:19.898341 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:35:19.898350 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:35:19.898376 | orchestrator | 2025-06-02 13:35:19.898386 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:35:19.898396 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 13:35:19.898407 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 13:35:19.898417 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 13:35:19.898427 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 13:35:19.898436 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 13:35:19.898446 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 13:35:19.898456 | orchestrator | 2025-06-02 13:35:19.898465 | orchestrator | 2025-06-02 13:35:19.898475 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:35:19.898484 | orchestrator | Monday 02 June 2025 13:35:16 +0000 (0:00:29.901) 0:04:21.612 *********** 2025-06-02 13:35:19.898494 | orchestrator | =============================================================================== 2025-06-02 13:35:19.898504 | orchestrator | neutron : Running Neutron bootstrap container 
-------------------------- 49.88s 2025-06-02 13:35:19.898513 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 29.90s 2025-06-02 13:35:19.898522 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.50s 2025-06-02 13:35:19.898532 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.23s 2025-06-02 13:35:19.898548 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.37s 2025-06-02 13:35:19.898564 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.28s 2025-06-02 13:35:19.898574 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.46s 2025-06-02 13:35:19.898583 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.16s 2025-06-02 13:35:19.898593 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.02s 2025-06-02 13:35:19.898602 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.93s 2025-06-02 13:35:19.898612 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 3.80s 2025-06-02 13:35:19.898621 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.78s 2025-06-02 13:35:19.898631 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.77s 2025-06-02 13:35:19.898641 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.71s 2025-06-02 13:35:19.898650 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.68s 2025-06-02 13:35:19.898660 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.63s 2025-06-02 13:35:19.898669 | orchestrator | service-ks-register : neutron | Creating users 
-------------------------- 3.63s 2025-06-02 13:35:19.898679 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 3.60s 2025-06-02 13:35:19.898689 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.59s 2025-06-02 13:35:19.898698 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 3.54s 2025-06-02 13:35:19.898707 | orchestrator | 2025-06-02 13:35:19 | INFO  | Task 380cf52a-52f8-4b85-b2ac-8dd1977b04c7 is in state STARTED 2025-06-02 13:35:19.898718 | orchestrator | 2025-06-02 13:35:19 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:19.898727 | orchestrator | 2025-06-02 13:35:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:22.927198 | orchestrator | 2025-06-02 13:35:22 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:22.929031 | orchestrator | 2025-06-02 13:35:22 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:22.930648 | orchestrator | 2025-06-02 13:35:22 | INFO  | Task 380cf52a-52f8-4b85-b2ac-8dd1977b04c7 is in state STARTED 2025-06-02 13:35:22.932692 | orchestrator | 2025-06-02 13:35:22 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:22.932748 | orchestrator | 2025-06-02 13:35:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:25.984950 | orchestrator | 2025-06-02 13:35:25 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:25.985247 | orchestrator | 2025-06-02 13:35:25 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:25.985811 | orchestrator | 2025-06-02 13:35:25 | INFO  | Task 380cf52a-52f8-4b85-b2ac-8dd1977b04c7 is in state SUCCESS 2025-06-02 13:35:25.986902 | orchestrator | 2025-06-02 13:35:25 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 
13:35:25.989187 | orchestrator | 2025-06-02 13:35:25 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:35:25.989416 | orchestrator | 2025-06-02 13:35:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:29.036816 | orchestrator | 2025-06-02 13:35:29 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:29.037611 | orchestrator | 2025-06-02 13:35:29 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:29.038876 | orchestrator | 2025-06-02 13:35:29 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:29.039608 | orchestrator | 2025-06-02 13:35:29 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:35:29.039848 | orchestrator | 2025-06-02 13:35:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:32.084696 | orchestrator | 2025-06-02 13:35:32 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:32.084912 | orchestrator | 2025-06-02 13:35:32 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:32.084946 | orchestrator | 2025-06-02 13:35:32 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:32.086254 | orchestrator | 2025-06-02 13:35:32 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:35:32.086290 | orchestrator | 2025-06-02 13:35:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:35:35.140102 | orchestrator | 2025-06-02 13:35:35 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:35:35.142187 | orchestrator | 2025-06-02 13:35:35 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:35:35.143971 | orchestrator | 2025-06-02 13:35:35 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:35:35.146556 | orchestrator 
| 2025-06-02 13:35:35 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:35:35.146776 | orchestrator | 2025-06-02 13:35:35 | INFO  | Wait 1 second(s) until the next check
06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:36:57.435569 | orchestrator | 2025-06-02 13:36:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:00.479607 | orchestrator | 2025-06-02 13:37:00 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:00.480615 | orchestrator | 2025-06-02 13:37:00 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:00.482604 | orchestrator | 2025-06-02 13:37:00 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:37:00.483875 | orchestrator | 2025-06-02 13:37:00 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:00.483899 | orchestrator | 2025-06-02 13:37:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:03.530354 | orchestrator | 2025-06-02 13:37:03 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:03.532571 | orchestrator | 2025-06-02 13:37:03 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:03.534530 | orchestrator | 2025-06-02 13:37:03 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:37:03.536616 | orchestrator | 2025-06-02 13:37:03 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:03.536682 | orchestrator | 2025-06-02 13:37:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:06.584265 | orchestrator | 2025-06-02 13:37:06 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:06.587459 | orchestrator | 2025-06-02 13:37:06 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:06.589667 | orchestrator | 2025-06-02 13:37:06 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state STARTED 2025-06-02 13:37:06.592248 | orchestrator | 2025-06-02 13:37:06 | INFO  | Task 
06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:06.593370 | orchestrator | 2025-06-02 13:37:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:09.633717 | orchestrator | 2025-06-02 13:37:09 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:09.636654 | orchestrator | 2025-06-02 13:37:09 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:09.642906 | orchestrator | 2025-06-02 13:37:09 | INFO  | Task 1fc3632c-43ac-43f2-8180-17ffc1886291 is in state SUCCESS 2025-06-02 13:37:09.645566 | orchestrator | 2025-06-02 13:37:09.645608 | orchestrator | 2025-06-02 13:37:09.645627 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:37:09.645648 | orchestrator | 2025-06-02 13:37:09.645670 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:37:09.645705 | orchestrator | Monday 02 June 2025 13:35:21 +0000 (0:00:00.173) 0:00:00.173 *********** 2025-06-02 13:37:09.645717 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:09.645729 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:37:09.645740 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:37:09.645751 | orchestrator | 2025-06-02 13:37:09.645762 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:37:09.645797 | orchestrator | Monday 02 June 2025 13:35:21 +0000 (0:00:00.326) 0:00:00.500 *********** 2025-06-02 13:37:09.645808 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 13:37:09.645819 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 13:37:09.645830 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 13:37:09.645841 | orchestrator | 2025-06-02 13:37:09.645851 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2025-06-02 13:37:09.645862 | orchestrator | 2025-06-02 13:37:09.645872 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-02 13:37:09.645883 | orchestrator | Monday 02 June 2025 13:35:22 +0000 (0:00:00.734) 0:00:01.234 *********** 2025-06-02 13:37:09.645894 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:37:09.645904 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:37:09.645915 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:09.645925 | orchestrator | 2025-06-02 13:37:09.645936 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:37:09.645948 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:37:09.645961 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:37:09.645973 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:37:09.645983 | orchestrator | 2025-06-02 13:37:09.645994 | orchestrator | 2025-06-02 13:37:09.646005 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:37:09.646063 | orchestrator | Monday 02 June 2025 13:35:23 +0000 (0:00:00.686) 0:00:01.920 *********** 2025-06-02 13:37:09.646078 | orchestrator | =============================================================================== 2025-06-02 13:37:09.646089 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-06-02 13:37:09.646099 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.69s 2025-06-02 13:37:09.646110 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-02 13:37:09.646120 | orchestrator | 2025-06-02 13:37:09.646131 | orchestrator 
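
The long runs of "Task … is in state STARTED" followed by "Wait 1 second(s) until the next check" above are produced by a client-side polling loop over the submitted task IDs. A minimal sketch of that pattern (`FakeBackend`, `wait_for_tasks`, and the counts below are illustrative stand-ins, not the actual OSISM client code):

```python
import itertools
import time

class FakeBackend:
    """Illustrative stand-in for a task backend (e.g. Celery): each task
    reports STARTED for a fixed number of checks, then SUCCESS."""
    def __init__(self, started_checks):
        self._counters = {t: itertools.count() for t in started_checks}
        self._limits = dict(started_checks)

    def state(self, task_id):
        # Report STARTED until the configured number of checks is used up.
        if next(self._counters[task_id]) < self._limits[task_id]:
            return "STARTED"
        return "SUCCESS"

def wait_for_tasks(backend, task_ids, interval=1):
    """Poll every task's state and sleep between rounds until none is
    STARTED, mirroring the wait loop seen in the log."""
    while True:
        states = {t: backend.state(t) for t in task_ids}
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        if all(s != "STARTED" for s in states.values()):
            return states
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(0)  # real code would sleep(interval); 0 keeps the sketch fast
```

In the job above, four tasks are polled every few seconds until the first of them transitions to SUCCESS, at which point the buffered Ansible output of the finished play is flushed to the console.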
| 2025-06-02 13:37:09.646141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:37:09.646152 | orchestrator | 2025-06-02 13:37:09.646162 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:37:09.646173 | orchestrator | Monday 02 June 2025 13:35:09 +0000 (0:00:00.342) 0:00:00.342 *********** 2025-06-02 13:37:09.646185 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:09.646199 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:37:09.646212 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:37:09.646224 | orchestrator | 2025-06-02 13:37:09.646238 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:37:09.646250 | orchestrator | Monday 02 June 2025 13:35:10 +0000 (0:00:00.452) 0:00:00.794 *********** 2025-06-02 13:37:09.646262 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-02 13:37:09.646275 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-02 13:37:09.646287 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-02 13:37:09.646299 | orchestrator | 2025-06-02 13:37:09.646339 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-02 13:37:09.646359 | orchestrator | 2025-06-02 13:37:09.646372 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 13:37:09.646385 | orchestrator | Monday 02 June 2025 13:35:11 +0000 (0:00:00.701) 0:00:01.495 *********** 2025-06-02 13:37:09.646398 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:09.646411 | orchestrator | 2025-06-02 13:37:09.646425 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-02 13:37:09.646446 | orchestrator | Monday 02 
June 2025 13:35:11 +0000 (0:00:00.719) 0:00:02.215 *********** 2025-06-02 13:37:09.646459 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-02 13:37:09.646471 | orchestrator | 2025-06-02 13:37:09.646484 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-02 13:37:09.646497 | orchestrator | Monday 02 June 2025 13:35:15 +0000 (0:00:03.704) 0:00:05.919 *********** 2025-06-02 13:37:09.646509 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-02 13:37:09.646522 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-02 13:37:09.646535 | orchestrator | 2025-06-02 13:37:09.646545 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-02 13:37:09.646556 | orchestrator | Monday 02 June 2025 13:35:22 +0000 (0:00:06.663) 0:00:12.583 *********** 2025-06-02 13:37:09.646567 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:37:09.646577 | orchestrator | 2025-06-02 13:37:09.646588 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-02 13:37:09.646599 | orchestrator | Monday 02 June 2025 13:35:25 +0000 (0:00:03.482) 0:00:16.065 *********** 2025-06-02 13:37:09.646624 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:37:09.646635 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-02 13:37:09.646646 | orchestrator | 2025-06-02 13:37:09.646657 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-02 13:37:09.646674 | orchestrator | Monday 02 June 2025 13:35:29 +0000 (0:00:03.937) 0:00:20.003 *********** 2025-06-02 13:37:09.646685 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:37:09.646696 | orchestrator 
| 2025-06-02 13:37:09.646706 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-02 13:37:09.646717 | orchestrator | Monday 02 June 2025 13:35:32 +0000 (0:00:03.266) 0:00:23.270 *********** 2025-06-02 13:37:09.646727 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-02 13:37:09.646738 | orchestrator | 2025-06-02 13:37:09.646749 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-02 13:37:09.646759 | orchestrator | Monday 02 June 2025 13:35:36 +0000 (0:00:03.987) 0:00:27.257 *********** 2025-06-02 13:37:09.646770 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.646780 | orchestrator | 2025-06-02 13:37:09.646791 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-02 13:37:09.646801 | orchestrator | Monday 02 June 2025 13:35:40 +0000 (0:00:03.270) 0:00:30.528 *********** 2025-06-02 13:37:09.646812 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.646822 | orchestrator | 2025-06-02 13:37:09.646833 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-02 13:37:09.646843 | orchestrator | Monday 02 June 2025 13:35:43 +0000 (0:00:03.628) 0:00:34.156 *********** 2025-06-02 13:37:09.646854 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.646869 | orchestrator | 2025-06-02 13:37:09.646887 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-02 13:37:09.646906 | orchestrator | Monday 02 June 2025 13:35:47 +0000 (0:00:03.863) 0:00:38.020 *********** 2025-06-02 13:37:09.646937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.646978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.646996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647099 | orchestrator | 2025-06-02 13:37:09.647116 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 13:37:09.647135 | orchestrator | Monday 02 June 2025 13:35:48 +0000 (0:00:01.265) 0:00:39.286 *********** 2025-06-02 13:37:09.647153 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.647171 | orchestrator | 2025-06-02 13:37:09.647188 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 13:37:09.647206 | orchestrator | Monday 02 June 2025 13:35:48 +0000 (0:00:00.129) 0:00:39.415 *********** 2025-06-02 13:37:09.647225 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.647244 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:09.647261 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:09.647276 | orchestrator | 2025-06-02 13:37:09.647287 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 13:37:09.647298 | orchestrator | Monday 
02 June 2025 13:35:49 +0000 (0:00:00.492) 0:00:39.908 *********** 2025-06-02 13:37:09.647352 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 13:37:09.647364 | orchestrator | 2025-06-02 13:37:09.647375 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 13:37:09.647385 | orchestrator | Monday 02 June 2025 13:35:50 +0000 (0:00:00.837) 0:00:40.746 *********** 2025-06-02 13:37:09.647397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647494 | orchestrator | 2025-06-02 13:37:09.647505 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 13:37:09.647521 | orchestrator | Monday 02 June 2025 13:35:52 +0000 (0:00:02.253) 0:00:42.999 *********** 2025-06-02 13:37:09.647533 | orchestrator | ok: [testbed-node-0] 
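
Each service definition above carries a healthcheck such as `['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511']`. Conceptually, such a check succeeds only if the endpoint answers an HTTP request within the timeout; a rough Python equivalent (an illustrative sketch, not Kolla's actual `healthcheck_curl` helper script):

```python
import urllib.request

def healthcheck_curl(url, timeout=10):
    """Return True iff the endpoint answers an HTTP request with a
    non-error status — roughly what a curl-based container healthcheck
    tests (illustrative stand-in for Kolla's healthcheck_curl)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # connection refused, timeout, or HTTP error status
        return False
```

The container runtime runs the configured test command repeatedly and marks the container unhealthy after consecutive failures, which is what the `interval`, `retries`, `start_period`, and `timeout` keys in the dictionaries above parameterize.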
2025-06-02 13:37:09.647544 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:37:09.647555 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:37:09.647565 | orchestrator | 2025-06-02 13:37:09.647576 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 13:37:09.647592 | orchestrator | Monday 02 June 2025 13:35:52 +0000 (0:00:00.282) 0:00:43.282 *********** 2025-06-02 13:37:09.647603 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:09.647614 | orchestrator | 2025-06-02 13:37:09.647625 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 13:37:09.647635 | orchestrator | Monday 02 June 2025 13:35:53 +0000 (0:00:00.707) 0:00:43.989 *********** 2025-06-02 13:37:09.647647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.647689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.647743 | orchestrator | 2025-06-02 13:37:09.647753 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 13:37:09.647764 | orchestrator | Monday 02 June 2025 13:35:56 +0000 (0:00:02.630) 0:00:46.620 *********** 2025-06-02 13:37:09.647776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.647788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.647799 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.647818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.647834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.647858 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
13:37:09.647869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.647881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.647892 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:09.647903 | orchestrator | 2025-06-02 13:37:09.647914 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 13:37:09.647924 | 
orchestrator | Monday 02 June 2025 13:35:57 +0000 (0:00:00.976) 0:00:47.597 *********** 2025-06-02 13:37:09.647936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.647961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.647981 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.647992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.648005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.648016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.648027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.648039 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:09.648050 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:09.648060 | orchestrator | 2025-06-02 13:37:09.648071 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 13:37:09.648088 | orchestrator | Monday 02 June 2025 13:35:58 +0000 (0:00:01.837) 0:00:49.434 *********** 2025-06-02 13:37:09.648112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648200 | orchestrator | 2025-06-02 13:37:09.648211 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 13:37:09.648222 | orchestrator | Monday 02 June 2025 13:36:01 +0000 (0:00:02.357) 0:00:51.792 *********** 2025-06-02 13:37:09.648233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648489 | orchestrator | 2025-06-02 13:37:09.648501 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 13:37:09.648512 | orchestrator | Monday 02 June 2025 13:36:06 +0000 (0:00:04.823) 0:00:56.616 *********** 2025-06-02 13:37:09.648524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.648535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.648558 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.648590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.648602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.648613 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:09.648624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 13:37:09.648636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:09.648647 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:09.648658 | orchestrator | 2025-06-02 13:37:09.648668 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-02 13:37:09.648685 | orchestrator | Monday 02 June 2025 13:36:06 +0000 (0:00:00.676) 0:00:57.292 *********** 2025-06-02 13:37:09.648703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 13:37:09.648742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:09.648789 | orchestrator | 2025-06-02 13:37:09.648800 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-06-02 13:37:09.648816 | orchestrator | Monday 02 June 2025 13:36:08 +0000 (0:00:01.945) 0:00:59.238 *********** 2025-06-02 13:37:09.648827 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:09.648838 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:09.648848 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:09.648859 | orchestrator | 2025-06-02 13:37:09.648870 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-02 13:37:09.648881 | orchestrator | Monday 02 June 2025 13:36:09 +0000 (0:00:00.308) 0:00:59.546 *********** 2025-06-02 13:37:09.648891 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.648902 | orchestrator | 2025-06-02 13:37:09.648913 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-02 13:37:09.648923 | orchestrator | Monday 02 June 2025 13:36:11 +0000 (0:00:02.137) 0:01:01.684 *********** 2025-06-02 13:37:09.648934 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.648945 | orchestrator | 2025-06-02 13:37:09.648955 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-02 13:37:09.648966 | orchestrator | Monday 02 June 2025 13:36:13 +0000 (0:00:02.163) 0:01:03.848 *********** 2025-06-02 13:37:09.648977 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.648988 | orchestrator | 2025-06-02 13:37:09.648999 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 13:37:09.649009 | orchestrator | Monday 02 June 2025 13:36:35 +0000 (0:00:22.283) 0:01:26.132 *********** 2025-06-02 13:37:09.649020 | orchestrator | 2025-06-02 13:37:09.649031 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 13:37:09.649042 | orchestrator | Monday 02 June 2025 13:36:35 +0000 (0:00:00.066) 
0:01:26.199 *********** 2025-06-02 13:37:09.649052 | orchestrator | 2025-06-02 13:37:09.649063 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 13:37:09.649074 | orchestrator | Monday 02 June 2025 13:36:35 +0000 (0:00:00.061) 0:01:26.260 *********** 2025-06-02 13:37:09.649084 | orchestrator | 2025-06-02 13:37:09.649095 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-02 13:37:09.649106 | orchestrator | Monday 02 June 2025 13:36:35 +0000 (0:00:00.062) 0:01:26.322 *********** 2025-06-02 13:37:09.649116 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.649127 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:09.649138 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:09.649149 | orchestrator | 2025-06-02 13:37:09.649159 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-02 13:37:09.649170 | orchestrator | Monday 02 June 2025 13:36:56 +0000 (0:00:20.999) 0:01:47.322 *********** 2025-06-02 13:37:09.649191 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:09.649202 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:09.649213 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:09.649223 | orchestrator | 2025-06-02 13:37:09.649234 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:37:09.649245 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 13:37:09.649258 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:37:09.649269 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:37:09.649280 | orchestrator | 2025-06-02 13:37:09.649290 | orchestrator | 2025-06-02 13:37:09.649301 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:37:09.649338 | orchestrator | Monday 02 June 2025 13:37:07 +0000 (0:00:10.209) 0:01:57.532 *********** 2025-06-02 13:37:09.649350 | orchestrator | =============================================================================== 2025-06-02 13:37:09.649360 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 22.28s 2025-06-02 13:37:09.649371 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.00s 2025-06-02 13:37:09.649381 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.21s 2025-06-02 13:37:09.649392 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.66s 2025-06-02 13:37:09.649403 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.82s 2025-06-02 13:37:09.649413 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.99s 2025-06-02 13:37:09.649424 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.94s 2025-06-02 13:37:09.649436 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.86s 2025-06-02 13:37:09.649454 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.70s 2025-06-02 13:37:09.649473 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.63s 2025-06-02 13:37:09.649492 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.48s 2025-06-02 13:37:09.649510 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.27s 2025-06-02 13:37:09.649527 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.27s 2025-06-02 13:37:09.649539 | orchestrator | 
service-cert-copy : magnum | Copying over extra CA certificates --------- 2.63s 2025-06-02 13:37:09.649549 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.36s 2025-06-02 13:37:09.649560 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.25s 2025-06-02 13:37:09.649578 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.16s 2025-06-02 13:37:09.649589 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2025-06-02 13:37:09.649606 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.95s 2025-06-02 13:37:09.649617 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.84s 2025-06-02 13:37:09.649628 | orchestrator | 2025-06-02 13:37:09 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:09.649639 | orchestrator | 2025-06-02 13:37:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:12.686281 | orchestrator | 2025-06-02 13:37:12 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:12.688459 | orchestrator | 2025-06-02 13:37:12 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:12.689524 | orchestrator | 2025-06-02 13:37:12 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:12.689550 | orchestrator | 2025-06-02 13:37:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:15.730550 | orchestrator | 2025-06-02 13:37:15 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:15.732602 | orchestrator | 2025-06-02 13:37:15 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:15.734762 | orchestrator | 2025-06-02 13:37:15 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 
2025-06-02 13:37:15.734806 | orchestrator | 2025-06-02 13:37:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:18.785415 | orchestrator | 2025-06-02 13:37:18 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:18.786264 | orchestrator | 2025-06-02 13:37:18 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:18.788126 | orchestrator | 2025-06-02 13:37:18 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:18.788176 | orchestrator | 2025-06-02 13:37:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:21.842182 | orchestrator | 2025-06-02 13:37:21 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state STARTED 2025-06-02 13:37:21.842587 | orchestrator | 2025-06-02 13:37:21 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:21.843760 | orchestrator | 2025-06-02 13:37:21 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:21.843794 | orchestrator | 2025-06-02 13:37:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:24.891047 | orchestrator | 2025-06-02 13:37:24 | INFO  | Task cfc00f82-29fd-4b7c-a7de-94981618e131 is in state SUCCESS 2025-06-02 13:37:24.893324 | orchestrator | 2025-06-02 13:37:24.893382 | orchestrator | 2025-06-02 13:37:24.893396 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:37:24.893409 | orchestrator | 2025-06-02 13:37:24.893421 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-02 13:37:24.893445 | orchestrator | Monday 02 June 2025 13:28:34 +0000 (0:00:00.253) 0:00:00.253 *********** 2025-06-02 13:37:24.893458 | orchestrator | changed: [testbed-manager] 2025-06-02 13:37:24.893470 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.893481 | orchestrator | changed: [testbed-node-1] 
2025-06-02 13:37:24.893492 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.893502 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.893645 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.894282 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.894334 | orchestrator | 2025-06-02 13:37:24.894347 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:37:24.894359 | orchestrator | Monday 02 June 2025 13:28:34 +0000 (0:00:00.713) 0:00:00.966 *********** 2025-06-02 13:37:24.894370 | orchestrator | changed: [testbed-manager] 2025-06-02 13:37:24.894381 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.894392 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.894403 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.894413 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.894437 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.894448 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.894459 | orchestrator | 2025-06-02 13:37:24.894470 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:37:24.894481 | orchestrator | Monday 02 June 2025 13:28:35 +0000 (0:00:00.662) 0:00:01.629 *********** 2025-06-02 13:37:24.897098 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-02 13:37:24.897149 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 13:37:24.897252 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 13:37:24.897278 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 13:37:24.897350 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-02 13:37:24.897370 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-02 13:37:24.897388 | orchestrator | changed: [testbed-node-5] => 
(item=enable_nova_True) 2025-06-02 13:37:24.897406 | orchestrator | 2025-06-02 13:37:24.897423 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-02 13:37:24.897440 | orchestrator | 2025-06-02 13:37:24.897555 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 13:37:24.897577 | orchestrator | Monday 02 June 2025 13:28:36 +0000 (0:00:00.736) 0:00:02.366 *********** 2025-06-02 13:37:24.897612 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.897623 | orchestrator | 2025-06-02 13:37:24.897633 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-02 13:37:24.897643 | orchestrator | Monday 02 June 2025 13:28:36 +0000 (0:00:00.563) 0:00:02.929 *********** 2025-06-02 13:37:24.897653 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-02 13:37:24.897665 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-02 13:37:24.897677 | orchestrator | 2025-06-02 13:37:24.897688 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-02 13:37:24.897699 | orchestrator | Monday 02 June 2025 13:28:40 +0000 (0:00:03.470) 0:00:06.399 *********** 2025-06-02 13:37:24.897710 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:37:24.897721 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 13:37:24.897732 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.897743 | orchestrator | 2025-06-02 13:37:24.897754 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 13:37:24.897765 | orchestrator | Monday 02 June 2025 13:28:43 +0000 (0:00:03.414) 0:00:09.814 *********** 2025-06-02 13:37:24.897776 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.897787 | orchestrator | 2025-06-02 
13:37:24.897798 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-02 13:37:24.897808 | orchestrator | Monday 02 June 2025 13:28:44 +0000 (0:00:00.777) 0:00:10.592 *********** 2025-06-02 13:37:24.897819 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.897830 | orchestrator | 2025-06-02 13:37:24.897841 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-02 13:37:24.897852 | orchestrator | Monday 02 June 2025 13:28:46 +0000 (0:00:01.505) 0:00:12.098 *********** 2025-06-02 13:37:24.897862 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.897874 | orchestrator | 2025-06-02 13:37:24.897884 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 13:37:24.897895 | orchestrator | Monday 02 June 2025 13:28:49 +0000 (0:00:03.224) 0:00:15.322 *********** 2025-06-02 13:37:24.897906 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.897917 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.897928 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.897939 | orchestrator | 2025-06-02 13:37:24.897950 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 13:37:24.897961 | orchestrator | Monday 02 June 2025 13:28:49 +0000 (0:00:00.453) 0:00:15.776 *********** 2025-06-02 13:37:24.897973 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.897983 | orchestrator | 2025-06-02 13:37:24.897994 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-02 13:37:24.898005 | orchestrator | Monday 02 June 2025 13:29:21 +0000 (0:00:31.500) 0:00:47.276 *********** 2025-06-02 13:37:24.898058 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.898071 | orchestrator | 2025-06-02 13:37:24.898084 | orchestrator | TASK [nova-cell : Get a list of existing cells] 
******************************** 2025-06-02 13:37:24.898145 | orchestrator | Monday 02 June 2025 13:29:32 +0000 (0:00:11.650) 0:00:58.927 *********** 2025-06-02 13:37:24.898167 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.898184 | orchestrator | 2025-06-02 13:37:24.898201 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 13:37:24.898218 | orchestrator | Monday 02 June 2025 13:29:43 +0000 (0:00:10.814) 0:01:09.741 *********** 2025-06-02 13:37:24.898256 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.898274 | orchestrator | 2025-06-02 13:37:24.898292 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-02 13:37:24.898332 | orchestrator | Monday 02 June 2025 13:29:46 +0000 (0:00:02.664) 0:01:12.406 *********** 2025-06-02 13:37:24.898348 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.898365 | orchestrator | 2025-06-02 13:37:24.898381 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 13:37:24.898397 | orchestrator | Monday 02 June 2025 13:29:46 +0000 (0:00:00.439) 0:01:12.845 *********** 2025-06-02 13:37:24.898416 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.898432 | orchestrator | 2025-06-02 13:37:24.898449 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 13:37:24.898466 | orchestrator | Monday 02 June 2025 13:29:47 +0000 (0:00:00.395) 0:01:13.240 *********** 2025-06-02 13:37:24.898483 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.898499 | orchestrator | 2025-06-02 13:37:24.898517 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 13:37:24.898533 | orchestrator | Monday 02 June 2025 13:30:04 +0000 (0:00:17.726) 0:01:30.967 *********** 
2025-06-02 13:37:24.898549 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.898565 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.898581 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.898597 | orchestrator | 2025-06-02 13:37:24.898614 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-02 13:37:24.898628 | orchestrator | 2025-06-02 13:37:24.898643 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 13:37:24.898658 | orchestrator | Monday 02 June 2025 13:30:05 +0000 (0:00:00.329) 0:01:31.296 *********** 2025-06-02 13:37:24.898668 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.898677 | orchestrator | 2025-06-02 13:37:24.898687 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-02 13:37:24.898696 | orchestrator | Monday 02 June 2025 13:30:05 +0000 (0:00:00.608) 0:01:31.905 *********** 2025-06-02 13:37:24.898706 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.898715 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.898725 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.898734 | orchestrator | 2025-06-02 13:37:24.898743 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-02 13:37:24.898753 | orchestrator | Monday 02 June 2025 13:30:07 +0000 (0:00:01.940) 0:01:33.845 *********** 2025-06-02 13:37:24.898762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.898772 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.898781 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.898791 | orchestrator | 2025-06-02 13:37:24.898809 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 13:37:24.898819 | orchestrator | Monday 
02 June 2025 13:30:09 +0000 (0:00:01.997) 0:01:35.843 *********** 2025-06-02 13:37:24.898828 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.898838 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.898847 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.898856 | orchestrator | 2025-06-02 13:37:24.898866 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 13:37:24.898875 | orchestrator | Monday 02 June 2025 13:30:10 +0000 (0:00:00.357) 0:01:36.200 *********** 2025-06-02 13:37:24.898885 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 13:37:24.898904 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.898913 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 13:37:24.898923 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.898932 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 13:37:24.898942 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-02 13:37:24.898951 | orchestrator | 2025-06-02 13:37:24.898961 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 13:37:24.898970 | orchestrator | Monday 02 June 2025 13:30:19 +0000 (0:00:08.856) 0:01:45.057 *********** 2025-06-02 13:37:24.898979 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.898990 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899005 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899021 | orchestrator | 2025-06-02 13:37:24.899037 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 13:37:24.899052 | orchestrator | Monday 02 June 2025 13:30:19 +0000 (0:00:00.633) 0:01:45.691 *********** 2025-06-02 13:37:24.899068 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 13:37:24.899111 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 13:37:24.899129 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 13:37:24.899146 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.899164 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 13:37:24.899182 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899199 | orchestrator | 2025-06-02 13:37:24.899211 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 13:37:24.899221 | orchestrator | Monday 02 June 2025 13:30:20 +0000 (0:00:01.114) 0:01:46.805 *********** 2025-06-02 13:37:24.899233 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899243 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.899254 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899264 | orchestrator | 2025-06-02 13:37:24.899276 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-02 13:37:24.899286 | orchestrator | Monday 02 June 2025 13:30:21 +0000 (0:00:00.823) 0:01:47.629 *********** 2025-06-02 13:37:24.899325 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899342 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899353 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.899364 | orchestrator | 2025-06-02 13:37:24.899376 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-02 13:37:24.899387 | orchestrator | Monday 02 June 2025 13:30:22 +0000 (0:00:00.989) 0:01:48.618 *********** 2025-06-02 13:37:24.899398 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899409 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899441 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.899452 | orchestrator | 2025-06-02 13:37:24.899463 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 
2025-06-02 13:37:24.899474 | orchestrator | Monday 02 June 2025 13:30:24 +0000 (0:00:02.044) 0:01:50.663 *********** 2025-06-02 13:37:24.899485 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899507 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.899518 | orchestrator | 2025-06-02 13:37:24.899528 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 13:37:24.899537 | orchestrator | Monday 02 June 2025 13:30:47 +0000 (0:00:23.083) 0:02:13.746 *********** 2025-06-02 13:37:24.899546 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899556 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899565 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.899575 | orchestrator | 2025-06-02 13:37:24.899584 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 13:37:24.899594 | orchestrator | Monday 02 June 2025 13:30:58 +0000 (0:00:10.845) 0:02:24.592 *********** 2025-06-02 13:37:24.899603 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.899622 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899632 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899642 | orchestrator | 2025-06-02 13:37:24.899651 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-02 13:37:24.899660 | orchestrator | Monday 02 June 2025 13:30:59 +0000 (0:00:01.409) 0:02:26.002 *********** 2025-06-02 13:37:24.899670 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899680 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899689 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.899698 | orchestrator | 2025-06-02 13:37:24.899708 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-02 
13:37:24.899717 | orchestrator | Monday 02 June 2025 13:31:11 +0000 (0:00:11.239) 0:02:37.242 *********** 2025-06-02 13:37:24.899727 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.899736 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899745 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899755 | orchestrator | 2025-06-02 13:37:24.899764 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 13:37:24.899774 | orchestrator | Monday 02 June 2025 13:31:12 +0000 (0:00:01.604) 0:02:38.847 *********** 2025-06-02 13:37:24.899783 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.899792 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.899802 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.899811 | orchestrator | 2025-06-02 13:37:24.899821 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-02 13:37:24.899830 | orchestrator | 2025-06-02 13:37:24.899839 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 13:37:24.899856 | orchestrator | Monday 02 June 2025 13:31:13 +0000 (0:00:00.346) 0:02:39.193 *********** 2025-06-02 13:37:24.899865 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.899876 | orchestrator | 2025-06-02 13:37:24.899885 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-02 13:37:24.899895 | orchestrator | Monday 02 June 2025 13:31:13 +0000 (0:00:00.520) 0:02:39.714 *********** 2025-06-02 13:37:24.899904 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-02 13:37:24.899914 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-02 13:37:24.899923 | orchestrator | 2025-06-02 13:37:24.899933 | orchestrator | TASK 
[service-ks-register : nova | Creating endpoints] ************************* 2025-06-02 13:37:24.899942 | orchestrator | Monday 02 June 2025 13:31:16 +0000 (0:00:03.031) 0:02:42.746 *********** 2025-06-02 13:37:24.899952 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-02 13:37:24.899963 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-02 13:37:24.899972 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-02 13:37:24.899982 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-02 13:37:24.899991 | orchestrator | 2025-06-02 13:37:24.900000 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-02 13:37:24.900010 | orchestrator | Monday 02 June 2025 13:31:22 +0000 (0:00:06.142) 0:02:48.889 *********** 2025-06-02 13:37:24.900019 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:37:24.900029 | orchestrator | 2025-06-02 13:37:24.900038 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-02 13:37:24.900047 | orchestrator | Monday 02 June 2025 13:31:26 +0000 (0:00:03.235) 0:02:52.124 *********** 2025-06-02 13:37:24.900057 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:37:24.900066 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-02 13:37:24.900082 | orchestrator | 2025-06-02 13:37:24.900092 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-02 13:37:24.900101 | orchestrator | Monday 02 June 2025 13:31:29 +0000 (0:00:03.665) 0:02:55.790 *********** 2025-06-02 13:37:24.900112 | orchestrator | ok: 
[testbed-node-0] => (item=admin)
2025-06-02 13:37:24.900128 | orchestrator |
2025-06-02 13:37:24.900145 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-02 13:37:24.900163 | orchestrator | Monday 02 June 2025 13:31:32 +0000 (0:00:03.126) 0:02:58.916 ***********
2025-06-02 13:37:24.900181 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-02 13:37:24.900197 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-02 13:37:24.900212 | orchestrator |
2025-06-02 13:37:24.900222 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-02 13:37:24.900240 | orchestrator | Monday 02 June 2025 13:31:40 +0000 (0:00:07.151) 0:03:06.068 ***********
2025-06-02 13:37:24.900255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:37:24.900276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:37:24.900288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-02 13:37:24.900338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:37:24.900353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:37:24.900363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 13:37:24.900373 | orchestrator |
2025-06-02 13:37:24.900382 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-02 13:37:24.900392 | orchestrator | Monday 02 June 2025 13:31:41 +0000 (0:00:01.868) 0:03:07.937 ***********
2025-06-02 13:37:24.900402 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.900411 | orchestrator |
2025-06-02 13:37:24.900421 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-02 13:37:24.900435 | orchestrator | Monday 02 June 2025 13:31:42 +0000 (0:00:00.143) 0:03:08.080 ***********
2025-06-02 13:37:24.900445 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.900454 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.900464 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.900473 | orchestrator |
2025-06-02 13:37:24.900483 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-02 13:37:24.900492 | orchestrator | Monday 02 June 2025 13:31:42 +0000 (0:00:00.528) 0:03:08.609 ***********
2025-06-02 13:37:24.900502 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:37:24.900511 | orchestrator |
2025-06-02 13:37:24.900521 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-02 13:37:24.900530 | orchestrator | Monday 02 June 2025 13:31:43 +0000 (0:00:00.791) 0:03:09.400 ***********
2025-06-02 13:37:24.900540 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.900549 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.900565 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.900574 | orchestrator |
2025-06-02 13:37:24.900584 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-02 13:37:24.900593 | orchestrator | Monday 02 June 2025 13:31:43 +0000 (0:00:00.488) 0:03:09.888 ***********
2025-06-02 13:37:24.900603 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:37:24.900612 | orchestrator |
2025-06-02 13:37:24.900622 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-02 13:37:24.900631 | orchestrator | Monday 02 June 2025 13:31:45 +0000 (0:00:01.294) 0:03:11.182 ***********
2025-06-02 13:37:24.900648 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2025-06-02 13:37:24.900660 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.900676 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2025-06-02 13:37:24.900693 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 13:37:24.900704 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.900722 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 13:37:24.900733 | orchestrator |
2025-06-02 13:37:24.900742 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-02 13:37:24.900752 | orchestrator | Monday 02 June 2025 13:31:47 +0000 (0:00:02.683) 0:03:13.866 ***********
2025-06-02 13:37:24.900762 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.900783 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.900799 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.900810 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-02 13:37:24.900826 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 13:37:24.900836 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.900847 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-02 13:37:24.900862 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 13:37:24.900879 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.900889 | orchestrator |
2025-06-02 13:37:24.900898 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-02 13:37:24.900908 | orchestrator | Monday 02 June 2025 13:31:48 +0000 (0:00:00.925) 0:03:14.792 ***********
2025-06-02 13:37:24.900918 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.900929 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.900939 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.900956 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-02 13:37:24.900968 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 13:37:24.900983 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.900998 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-02 13:37:24.901008 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 13:37:24.901018 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.901028 | orchestrator |
2025-06-02 13:37:24.901037 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-06-02 13:37:24.901047 | orchestrator | Monday 02 June 2025 13:31:50 +0000 (0:00:01.444) 0:03:16.236 ***********
2025-06-02 13:37:24.901064 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.901080 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2025-06-02 13:37:24.901097 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2025-06-02 13:37:24.901114 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.901125 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 13:37:24.901135 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 13:37:24.901156 | orchestrator |
2025-06-02 13:37:24.901173 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-06-02 13:37:24.901190 | orchestrator | Monday 02 June 2025 13:31:52 +0000 (0:00:02.738) 0:03:18.975 ***********
2025-06-02 13:37:24.901216 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2025-06-02 13:37:24.901237 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.901264 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2025-06-02 13:37:24.901283 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.901324 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 13:37:24.901337 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 13:37:24.901347 | orchestrator |
2025-06-02 13:37:24.901357 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-06-02 13:37:24.901367 | orchestrator | Monday 02 June 2025 13:32:01 +0000 (0:00:08.774) 0:03:27.750 ***********
2025-06-02 13:37:24.901384 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-02 13:37:24.901395 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 13:37:24.901413 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.901427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes':
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 13:37:24.901438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.901448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.901458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 13:37:24.901476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.901487 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.901503 | orchestrator | 2025-06-02 13:37:24.901513 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-02 13:37:24.901522 | orchestrator | Monday 02 June 2025 13:32:03 +0000 (0:00:01.406) 0:03:29.156 *********** 
2025-06-02 13:37:24.901532 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.901541 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.901551 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.901560 | orchestrator | 2025-06-02 13:37:24.901570 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-02 13:37:24.901579 | orchestrator | Monday 02 June 2025 13:32:06 +0000 (0:00:03.036) 0:03:32.192 *********** 2025-06-02 13:37:24.901589 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.901598 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.901608 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.901617 | orchestrator | 2025-06-02 13:37:24.901627 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-02 13:37:24.901637 | orchestrator | Monday 02 June 2025 13:32:06 +0000 (0:00:00.262) 0:03:32.455 *********** 2025-06-02 13:37:24.901655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 13:37:24.901666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 13:37:24.901686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 13:37:24.901704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.901719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.901729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.901739 | orchestrator | 2025-06-02 13:37:24.901749 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 13:37:24.901758 | orchestrator | Monday 02 June 2025 13:32:08 +0000 (0:00:01.867) 0:03:34.323 *********** 2025-06-02 13:37:24.901768 | orchestrator | 2025-06-02 13:37:24.901778 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 13:37:24.901787 | orchestrator | Monday 02 June 2025 13:32:08 +0000 (0:00:00.258) 0:03:34.581 *********** 2025-06-02 13:37:24.901796 | orchestrator | 2025-06-02 13:37:24.901806 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 13:37:24.901815 | orchestrator | Monday 02 June 2025 13:32:08 +0000 (0:00:00.346) 0:03:34.927 *********** 2025-06-02 13:37:24.901825 | orchestrator | 2025-06-02 13:37:24.901834 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-02 13:37:24.901844 | orchestrator | Monday 02 June 2025 13:32:09 +0000 (0:00:00.611) 0:03:35.539 *********** 2025-06-02 13:37:24.901854 | orchestrator | changed: [testbed-node-0] 2025-06-02 
13:37:24.901863 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.901873 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.901882 | orchestrator | 2025-06-02 13:37:24.901891 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-02 13:37:24.901909 | orchestrator | Monday 02 June 2025 13:32:29 +0000 (0:00:20.222) 0:03:55.761 *********** 2025-06-02 13:37:24.901919 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.901929 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.901938 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.901947 | orchestrator | 2025-06-02 13:37:24.901957 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-02 13:37:24.901966 | orchestrator | 2025-06-02 13:37:24.901976 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 13:37:24.901986 | orchestrator | Monday 02 June 2025 13:32:41 +0000 (0:00:11.279) 0:04:07.040 *********** 2025-06-02 13:37:24.901996 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.902006 | orchestrator | 2025-06-02 13:37:24.902051 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 13:37:24.902064 | orchestrator | Monday 02 June 2025 13:32:43 +0000 (0:00:02.206) 0:04:09.247 *********** 2025-06-02 13:37:24.902073 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.902083 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.902092 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.902101 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.902111 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.902120 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
13:37:24.902129 | orchestrator | 2025-06-02 13:37:24.902139 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-02 13:37:24.902148 | orchestrator | Monday 02 June 2025 13:32:44 +0000 (0:00:00.866) 0:04:10.113 *********** 2025-06-02 13:37:24.902158 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.902167 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.902177 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.902186 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 13:37:24.902198 | orchestrator | 2025-06-02 13:37:24.902215 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 13:37:24.902232 | orchestrator | Monday 02 June 2025 13:32:45 +0000 (0:00:01.015) 0:04:11.129 *********** 2025-06-02 13:37:24.902250 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-02 13:37:24.902268 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-02 13:37:24.902284 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-02 13:37:24.902323 | orchestrator | 2025-06-02 13:37:24.902334 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 13:37:24.902344 | orchestrator | Monday 02 June 2025 13:32:46 +0000 (0:00:01.061) 0:04:12.190 *********** 2025-06-02 13:37:24.902353 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-02 13:37:24.902363 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-02 13:37:24.902372 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-02 13:37:24.902381 | orchestrator | 2025-06-02 13:37:24.902391 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 13:37:24.902401 | orchestrator | Monday 02 June 2025 13:32:47 +0000 (0:00:01.491) 0:04:13.682 
*********** 2025-06-02 13:37:24.902410 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-02 13:37:24.902420 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.902429 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-02 13:37:24.902438 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.902448 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-02 13:37:24.902457 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.902467 | orchestrator | 2025-06-02 13:37:24.902482 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-02 13:37:24.902492 | orchestrator | Monday 02 June 2025 13:32:48 +0000 (0:00:00.568) 0:04:14.251 *********** 2025-06-02 13:37:24.902509 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:37:24.902518 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:37:24.902528 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.902537 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:37:24.902546 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:37:24.902556 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.902565 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 13:37:24.902575 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 13:37:24.902584 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.902594 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 13:37:24.902604 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 13:37:24.902613 | orchestrator | changed: 
[testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 13:37:24.902623 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 13:37:24.902632 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 13:37:24.902642 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 13:37:24.902651 | orchestrator | 2025-06-02 13:37:24.902660 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-02 13:37:24.902670 | orchestrator | Monday 02 June 2025 13:32:50 +0000 (0:00:02.109) 0:04:16.360 *********** 2025-06-02 13:37:24.902679 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.902689 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.902698 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.902708 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.902717 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.902726 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.902736 | orchestrator | 2025-06-02 13:37:24.902745 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 13:37:24.902755 | orchestrator | Monday 02 June 2025 13:32:51 +0000 (0:00:01.600) 0:04:17.961 *********** 2025-06-02 13:37:24.902765 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.902774 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.902783 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.902793 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.902802 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.902811 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.902821 | orchestrator | 2025-06-02 13:37:24.902830 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 
2025-06-02 13:37:24.902840 | orchestrator | Monday 02 June 2025 13:32:55 +0000 (0:00:03.624) 0:04:21.585 *********** 2025-06-02 13:37:24.902858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.902871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.902891 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.902902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.902913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903116 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903246 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903288 | orchestrator | 2025-06-02 13:37:24.903329 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 13:37:24.903348 | orchestrator | Monday 02 June 2025 13:32:59 +0000 (0:00:03.978) 0:04:25.564 *********** 2025-06-02 13:37:24.903366 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:37:24.903382 | orchestrator | 2025-06-02 13:37:24.903399 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 13:37:24.903410 | orchestrator | Monday 02 June 2025 13:33:01 +0000 (0:00:01.731) 0:04:27.296 *********** 2025-06-02 
13:37:24.903420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903669 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903731 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.903839 | orchestrator | 2025-06-02 13:37:24.903851 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 13:37:24.903864 | orchestrator | Monday 02 June 2025 13:33:06 +0000 (0:00:05.212) 0:04:32.508 *********** 2025-06-02 13:37:24.903882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.903895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.903909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.903922 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.903971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.903992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.904006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904018 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.904037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.904049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.904061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904078 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.904119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904143 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.904154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904183 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.904194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904223 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.904234 | orchestrator | 2025-06-02 13:37:24.904245 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 13:37:24.904256 | orchestrator | Monday 02 June 2025 13:33:09 +0000 (0:00:02.871) 0:04:35.380 *********** 2025-06-02 13:37:24.904328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.904352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.904381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904402 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.904421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.904442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.904491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904505 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.904516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.904527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.904544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904555 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.904566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904596 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.904636 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.904662 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.904673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.904689 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 13:37:24.904700 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.904711 | orchestrator |
2025-06-02 13:37:24.904722 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 13:37:24.904733 | orchestrator | Monday 02 June 2025 13:33:12 +0000 (0:00:03.469) 0:04:38.849 ***********
2025-06-02 13:37:24.904751 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.904762 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.904772 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.904783 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 13:37:24.904794 | orchestrator |
2025-06-02 13:37:24.904805 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-02 13:37:24.904815 | orchestrator | Monday 02 June 2025 13:33:14 +0000 (0:00:01.841) 0:04:40.691 ***********
2025-06-02 13:37:24.904826 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 13:37:24.904837 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 13:37:24.904848 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 13:37:24.904858 | orchestrator |
2025-06-02 13:37:24.904869 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-02 13:37:24.904879 | orchestrator | Monday 02 June 2025 13:33:17 +0000 (0:00:02.951) 0:04:43.643 ***********
2025-06-02 13:37:24.904890 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 13:37:24.904901 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 13:37:24.904911 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 13:37:24.904922 | orchestrator |
2025-06-02 13:37:24.904932 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-02 13:37:24.904943 | orchestrator | Monday 02 June 2025 13:33:19 +0000 (0:00:01.630) 0:04:45.273 ***********
2025-06-02 13:37:24.904954 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:37:24.904965 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:37:24.904975 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:37:24.904986 | orchestrator |
2025-06-02 13:37:24.904997 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-02 13:37:24.905007 | orchestrator | Monday 02 June 2025 13:33:20 +0000 (0:00:01.049) 0:04:46.323 ***********
2025-06-02 13:37:24.905018 | orchestrator | ok: [testbed-node-3]
2025-06-02 13:37:24.905029 | orchestrator | ok: [testbed-node-4]
2025-06-02 13:37:24.905039 | orchestrator | ok: [testbed-node-5]
2025-06-02 13:37:24.905050 | orchestrator |
2025-06-02 13:37:24.905060 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-02 13:37:24.905071 | orchestrator | Monday 02 June 2025 13:33:21 +0000 (0:00:01.080) 0:04:47.403 ***********
2025-06-02 13:37:24.905082 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 13:37:24.905124 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 13:37:24.905137 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 13:37:24.905148 | orchestrator |
2025-06-02 13:37:24.905158 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-02 13:37:24.905169 | orchestrator | Monday 02 June 2025 13:33:22 +0000 (0:00:01.270) 0:04:48.673 ***********
2025-06-02 13:37:24.905180 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 13:37:24.905191 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 13:37:24.905201 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 13:37:24.905212 | orchestrator |
2025-06-02 13:37:24.905223 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-02 13:37:24.905233 | orchestrator | Monday 02 June 2025 13:33:24 +0000 (0:00:01.963) 0:04:50.637 ***********
2025-06-02 13:37:24.905244 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-02 13:37:24.905255 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-02 13:37:24.905266 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-02 13:37:24.905276 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-02 13:37:24.905287 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-02 13:37:24.905325 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-02 13:37:24.905347 | orchestrator |
2025-06-02 13:37:24.905368 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-02 13:37:24.905399 | orchestrator | Monday 02 June 2025 13:33:28 +0000 (0:00:03.757) 0:04:54.395 ***********
2025-06-02 13:37:24.905421 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:37:24.905442 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:37:24.905463 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:37:24.905476 | orchestrator |
2025-06-02 13:37:24.905487 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-02 13:37:24.905498 | orchestrator | Monday 02 June 2025 13:33:28 +0000 (0:00:00.255) 0:04:54.651 ***********
2025-06-02 13:37:24.905509 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:37:24.905519 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:37:24.905530 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:37:24.905540 | orchestrator |
2025-06-02 13:37:24.905551 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-02 13:37:24.905561 | orchestrator | Monday 02 June 2025 13:33:28 +0000 (0:00:00.266) 0:04:54.917 ***********
2025-06-02 13:37:24.905572 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:37:24.905582 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:37:24.905593 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:37:24.905603 | orchestrator |
2025-06-02 13:37:24.905624 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-02 13:37:24.905635 | orchestrator | Monday 02 June 2025 13:33:30 +0000 (0:00:01.791) 0:04:56.709 ***********
2025-06-02 13:37:24.905646 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 13:37:24.905657 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 13:37:24.905668 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-02 13:37:24.905679 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 13:37:24.905689 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 13:37:24.905700 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-02 13:37:24.905710 | orchestrator |
2025-06-02 13:37:24.905721 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-02 13:37:24.905732 | orchestrator | Monday 02 June 2025 13:33:33 +0000 (0:00:03.029) 0:04:59.739 ***********
2025-06-02 13:37:24.905742 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 13:37:24.905753 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 13:37:24.905764 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 13:37:24.905774 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 13:37:24.905785 | orchestrator | changed: [testbed-node-4]
2025-06-02 13:37:24.905795 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 13:37:24.905806 | orchestrator | changed: [testbed-node-5]
2025-06-02 13:37:24.905816 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 13:37:24.905827 | orchestrator | changed: [testbed-node-3]
2025-06-02 13:37:24.905838 | orchestrator |
2025-06-02 13:37:24.905848 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-02 13:37:24.905919 | orchestrator | Monday 02 June 2025 13:33:36 +0000 (0:00:03.129) 0:05:02.868 ***********
2025-06-02 13:37:24.905932 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:37:24.905943 | orchestrator |
2025-06-02 13:37:24.905953 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-02 13:37:24.905964 | orchestrator | Monday 02 June 2025 13:33:36 +0000 (0:00:00.127) 0:05:02.995 ***********
2025-06-02 13:37:24.905975 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:37:24.905994 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:37:24.906005 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:37:24.906046 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.906059 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.906070 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.906080 | orchestrator |
2025-06-02 13:37:24.906091 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-02 13:37:24.906149 | orchestrator | Monday 02 June 2025 13:33:37 +0000 (0:00:00.839) 0:05:03.835 ***********
2025-06-02 13:37:24.906162 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 13:37:24.906173 | orchestrator |
2025-06-02 13:37:24.906184 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-02 13:37:24.906195 | orchestrator | Monday 02 June 2025 13:33:38 +0000 (0:00:00.692) 0:05:04.528 ***********
2025-06-02 13:37:24.906205 | orchestrator | skipping: [testbed-node-3]
2025-06-02 13:37:24.906216 | orchestrator | skipping: [testbed-node-4]
2025-06-02 13:37:24.906226 | orchestrator | skipping: [testbed-node-5]
2025-06-02 13:37:24.906237 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:24.906248 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:24.906258 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:24.906269 | orchestrator |
2025-06-02 13:37:24.906279 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-06-02 13:37:24.906290 | orchestrator | Monday 02 June 2025 13:33:39 +0000 (0:00:00.622) 0:05:05.150 ***********
2025-06-02 13:37:24.906327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906587 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906606 | orchestrator | 2025-06-02 13:37:24.906617 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 13:37:24.906627 | orchestrator | Monday 02 June 2025 13:33:43 +0000 (0:00:04.324) 0:05:09.475 *********** 2025-06-02 13:37:24.906639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-06-02 13:37:24.906656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.906668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.906684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.906696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.906714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.906732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.906852 | orchestrator | 2025-06-02 13:37:24.906863 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 13:37:24.906874 | orchestrator | Monday 02 June 2025 13:33:49 +0000 (0:00:06.427) 0:05:15.903 *********** 2025-06-02 13:37:24.906884 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.906895 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.906906 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.906917 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.906927 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.906938 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.906948 | orchestrator | 2025-06-02 13:37:24.906964 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 13:37:24.906975 | orchestrator | Monday 02 June 2025 13:33:51 +0000 (0:00:01.895) 0:05:17.798 *********** 2025-06-02 13:37:24.906993 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 13:37:24.907003 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 13:37:24.907014 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 13:37:24.907025 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 13:37:24.907035 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 13:37:24.907046 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 13:37:24.907056 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 13:37:24.907067 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.907078 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 13:37:24.907088 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.907099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 13:37:24.907110 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.907121 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 13:37:24.907132 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 13:37:24.907142 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 13:37:24.907153 | orchestrator | 2025-06-02 13:37:24.907164 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-02 13:37:24.907174 | orchestrator | Monday 02 June 2025 13:33:55 +0000 (0:00:03.759) 0:05:21.558 *********** 2025-06-02 13:37:24.907185 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.907195 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.907206 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.907217 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.907227 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.907238 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.907248 | orchestrator | 2025-06-02 13:37:24.907259 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 13:37:24.907270 | orchestrator | Monday 02 June 2025 13:33:56 +0000 (0:00:00.637) 0:05:22.195 *********** 2025-06-02 13:37:24.907280 | orchestrator | skipping: [testbed-node-0] 
=> (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 13:37:24.907292 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 13:37:24.907338 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 13:37:24.907358 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907377 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 13:37:24.907388 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 13:37:24.907399 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907410 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 13:37:24.907420 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907431 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907453 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.907472 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907492 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.907511 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 13:37:24.907529 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
13:37:24.907545 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907556 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907567 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907578 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907588 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907607 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 13:37:24.907619 | orchestrator | 2025-06-02 13:37:24.907629 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-02 13:37:24.907640 | orchestrator | Monday 02 June 2025 13:34:01 +0000 (0:00:05.195) 0:05:27.391 *********** 2025-06-02 13:37:24.907650 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 13:37:24.907661 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 13:37:24.907672 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 13:37:24.907682 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 13:37:24.907692 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 13:37:24.907703 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:37:24.907713 | orchestrator | changed: 
[testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:37:24.907724 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 13:37:24.907735 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 13:37:24.907745 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 13:37:24.907756 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 13:37:24.907766 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 13:37:24.907777 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 13:37:24.907787 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.907798 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 13:37:24.907808 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.907819 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 13:37:24.907829 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.907840 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:37:24.907851 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:37:24.907861 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 13:37:24.907879 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 13:37:24.907890 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 13:37:24.907906 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 
'dest': 'id_rsa.pub'}) 2025-06-02 13:37:24.907917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 13:37:24.907928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 13:37:24.907939 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 13:37:24.907949 | orchestrator | 2025-06-02 13:37:24.907960 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 13:37:24.907970 | orchestrator | Monday 02 June 2025 13:34:08 +0000 (0:00:06.738) 0:05:34.129 *********** 2025-06-02 13:37:24.907981 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.907992 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.908002 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.908013 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.908023 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.908034 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.908044 | orchestrator | 2025-06-02 13:37:24.908055 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 13:37:24.908066 | orchestrator | Monday 02 June 2025 13:34:08 +0000 (0:00:00.491) 0:05:34.621 *********** 2025-06-02 13:37:24.908076 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.908087 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.908097 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.908108 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.908119 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.908129 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.908140 | orchestrator | 2025-06-02 13:37:24.908151 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 
2025-06-02 13:37:24.908161 | orchestrator | Monday 02 June 2025 13:34:09 +0000 (0:00:00.699) 0:05:35.320 *********** 2025-06-02 13:37:24.908172 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.908183 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.908193 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.908204 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.908214 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.908225 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.908235 | orchestrator | 2025-06-02 13:37:24.908246 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-02 13:37:24.908257 | orchestrator | Monday 02 June 2025 13:34:11 +0000 (0:00:02.159) 0:05:37.480 *********** 2025-06-02 13:37:24.908273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.908285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.908330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908343 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.908361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.908373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.908389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908401 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.908412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 13:37:24.908436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 13:37:24.908462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.908502 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.908521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908540 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.908569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.908600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908614 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.908625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 13:37:24.908643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 13:37:24.908655 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.908666 | orchestrator | 2025-06-02 13:37:24.908677 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-02 13:37:24.908687 | orchestrator | Monday 02 June 2025 13:34:13 +0000 (0:00:02.372) 0:05:39.852 *********** 2025-06-02 13:37:24.908698 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 13:37:24.908709 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908720 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.908730 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 13:37:24.908741 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908752 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.908762 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 13:37:24.908773 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908784 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.908794 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 13:37:24.908805 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908815 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.908826 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 13:37:24.908837 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908848 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.908858 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 13:37:24.908869 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 13:37:24.908880 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.908890 | orchestrator | 2025-06-02 13:37:24.908901 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-02 13:37:24.908912 | orchestrator | Monday 02 June 2025 13:34:14 +0000 (0:00:00.970) 0:05:40.823 *********** 2025-06-02 13:37:24.908927 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.908946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.908964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 13:37:24.908976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.908987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 
13:37:24.909034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909119 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 13:37:24.909142 | orchestrator | 2025-06-02 13:37:24.909153 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 13:37:24.909164 | orchestrator | Monday 02 June 2025 13:34:17 +0000 (0:00:02.864) 0:05:43.687 *********** 2025-06-02 13:37:24.909174 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 13:37:24.909185 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.909196 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.909212 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.909223 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.909234 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.909244 | orchestrator | 2025-06-02 13:37:24.909255 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 13:37:24.909266 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:00.537) 0:05:44.224 *********** 2025-06-02 13:37:24.909276 | orchestrator | 2025-06-02 13:37:24.909287 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 13:37:24.909323 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:00.330) 0:05:44.555 *********** 2025-06-02 13:37:24.909343 | orchestrator | 2025-06-02 13:37:24.909361 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 13:37:24.909379 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:00.134) 0:05:44.690 *********** 2025-06-02 13:37:24.909398 | orchestrator | 2025-06-02 13:37:24.909409 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 13:37:24.909420 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:00.140) 0:05:44.830 *********** 2025-06-02 13:37:24.909430 | orchestrator | 2025-06-02 13:37:24.909441 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 13:37:24.909452 | orchestrator | Monday 02 June 2025 13:34:18 +0000 (0:00:00.117) 0:05:44.948 *********** 2025-06-02 13:37:24.909462 | orchestrator | 2025-06-02 13:37:24.909473 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-02 
13:37:24.909483 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.118) 0:05:45.067 *********** 2025-06-02 13:37:24.909494 | orchestrator | 2025-06-02 13:37:24.909505 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-02 13:37:24.909515 | orchestrator | Monday 02 June 2025 13:34:19 +0000 (0:00:00.187) 0:05:45.254 *********** 2025-06-02 13:37:24.909526 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.909537 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.909551 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.909570 | orchestrator | 2025-06-02 13:37:24.909589 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-02 13:37:24.909610 | orchestrator | Monday 02 June 2025 13:34:28 +0000 (0:00:09.191) 0:05:54.445 *********** 2025-06-02 13:37:24.909628 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.909646 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.909657 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.909667 | orchestrator | 2025-06-02 13:37:24.909678 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-02 13:37:24.909689 | orchestrator | Monday 02 June 2025 13:34:45 +0000 (0:00:17.076) 0:06:11.521 *********** 2025-06-02 13:37:24.909700 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.909719 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.909730 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.909741 | orchestrator | 2025-06-02 13:37:24.909752 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-02 13:37:24.909762 | orchestrator | Monday 02 June 2025 13:35:07 +0000 (0:00:22.289) 0:06:33.810 *********** 2025-06-02 13:37:24.909773 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.909784 | orchestrator | 
changed: [testbed-node-5] 2025-06-02 13:37:24.909794 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.909805 | orchestrator | 2025-06-02 13:37:24.909815 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-02 13:37:24.909826 | orchestrator | Monday 02 June 2025 13:35:51 +0000 (0:00:43.813) 0:07:17.624 *********** 2025-06-02 13:37:24.909837 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.909847 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.909858 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.909868 | orchestrator | 2025-06-02 13:37:24.909879 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-02 13:37:24.909890 | orchestrator | Monday 02 June 2025 13:35:52 +0000 (0:00:01.023) 0:07:18.647 *********** 2025-06-02 13:37:24.909901 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.909911 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.909922 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.909932 | orchestrator | 2025-06-02 13:37:24.909943 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-02 13:37:24.909954 | orchestrator | Monday 02 June 2025 13:35:53 +0000 (0:00:00.790) 0:07:19.437 *********** 2025-06-02 13:37:24.909964 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:37:24.909975 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:37:24.909985 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:37:24.909996 | orchestrator | 2025-06-02 13:37:24.910007 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-02 13:37:24.910046 | orchestrator | Monday 02 June 2025 13:36:17 +0000 (0:00:23.962) 0:07:43.400 *********** 2025-06-02 13:37:24.910067 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.910078 | orchestrator | 
2025-06-02 13:37:24.910088 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-02 13:37:24.910099 | orchestrator | Monday 02 June 2025 13:36:17 +0000 (0:00:00.136) 0:07:43.537 *********** 2025-06-02 13:37:24.910110 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.910120 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.910131 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.910142 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.910152 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.910163 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-02 13:37:24.910174 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:37:24.910185 | orchestrator | 2025-06-02 13:37:24.910195 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-02 13:37:24.910206 | orchestrator | Monday 02 June 2025 13:36:39 +0000 (0:00:22.474) 0:08:06.011 *********** 2025-06-02 13:37:24.910217 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.910227 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.910238 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.910249 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.910267 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.910278 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.910288 | orchestrator | 2025-06-02 13:37:24.910322 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-02 13:37:24.910333 | orchestrator | Monday 02 June 2025 13:36:48 +0000 (0:00:08.885) 0:08:14.896 *********** 2025-06-02 13:37:24.910344 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.910355 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 13:37:24.910366 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.910376 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.910387 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.910398 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-02 13:37:24.910409 | orchestrator | 2025-06-02 13:37:24.910419 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 13:37:24.910430 | orchestrator | Monday 02 June 2025 13:36:52 +0000 (0:00:03.767) 0:08:18.664 *********** 2025-06-02 13:37:24.910441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:37:24.910451 | orchestrator | 2025-06-02 13:37:24.910462 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 13:37:24.910472 | orchestrator | Monday 02 June 2025 13:37:04 +0000 (0:00:11.800) 0:08:30.464 *********** 2025-06-02 13:37:24.910483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:37:24.910494 | orchestrator | 2025-06-02 13:37:24.910504 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-02 13:37:24.910515 | orchestrator | Monday 02 June 2025 13:37:05 +0000 (0:00:01.317) 0:08:31.782 *********** 2025-06-02 13:37:24.910526 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.910536 | orchestrator | 2025-06-02 13:37:24.910547 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-02 13:37:24.910558 | orchestrator | Monday 02 June 2025 13:37:07 +0000 (0:00:01.322) 0:08:33.104 *********** 2025-06-02 13:37:24.910569 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:37:24.910582 | orchestrator | 2025-06-02 13:37:24.910601 | orchestrator | TASK [nova-cell : Remove old 
nova_libvirt_secrets container volume] ************ 2025-06-02 13:37:24.910621 | orchestrator | Monday 02 June 2025 13:37:16 +0000 (0:00:09.529) 0:08:42.634 *********** 2025-06-02 13:37:24.910642 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:37:24.910661 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:37:24.910688 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:37:24.910699 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:24.910710 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:37:24.910720 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:37:24.910731 | orchestrator | 2025-06-02 13:37:24.910742 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-02 13:37:24.910752 | orchestrator | 2025-06-02 13:37:24.910769 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-02 13:37:24.910780 | orchestrator | Monday 02 June 2025 13:37:18 +0000 (0:00:01.661) 0:08:44.295 *********** 2025-06-02 13:37:24.910790 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:37:24.910801 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:24.910812 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:24.910822 | orchestrator | 2025-06-02 13:37:24.910833 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-02 13:37:24.910843 | orchestrator | 2025-06-02 13:37:24.910854 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-02 13:37:24.910865 | orchestrator | Monday 02 June 2025 13:37:19 +0000 (0:00:01.114) 0:08:45.409 *********** 2025-06-02 13:37:24.910875 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.910886 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.910896 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.910907 | orchestrator | 2025-06-02 13:37:24.910918 | orchestrator | PLAY [Reload Nova 
cell services] *********************************************** 2025-06-02 13:37:24.910929 | orchestrator | 2025-06-02 13:37:24.910939 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-02 13:37:24.910950 | orchestrator | Monday 02 June 2025 13:37:19 +0000 (0:00:00.511) 0:08:45.920 *********** 2025-06-02 13:37:24.910961 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-02 13:37:24.910971 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 13:37:24.910982 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 13:37:24.910992 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-02 13:37:24.911003 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-02 13:37:24.911013 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911024 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:37:24.911035 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-02 13:37:24.911045 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 13:37:24.911056 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 13:37:24.911066 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-02 13:37:24.911077 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-02 13:37:24.911087 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911098 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:37:24.911109 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-02 13:37:24.911119 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 13:37:24.911130 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 13:37:24.911141 
| orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-02 13:37:24.911152 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-02 13:37:24.911162 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911173 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:37:24.911184 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-02 13:37:24.911200 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 13:37:24.911211 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-02 13:37:24.911222 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-02 13:37:24.911239 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-02 13:37:24.911250 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911261 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.911271 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-02 13:37:24.911282 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-02 13:37:24.911316 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-02 13:37:24.911332 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-02 13:37:24.911343 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-02 13:37:24.911353 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911364 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.911374 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-02 13:37:24.911385 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-02 13:37:24.911395 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-02 13:37:24.911406 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-02 13:37:24.911416 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-02 13:37:24.911427 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-02 13:37:24.911437 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.911448 | orchestrator | 2025-06-02 13:37:24.911459 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-02 13:37:24.911469 | orchestrator | 2025-06-02 13:37:24.911480 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-02 13:37:24.911491 | orchestrator | Monday 02 June 2025 13:37:21 +0000 (0:00:01.285) 0:08:47.206 *********** 2025-06-02 13:37:24.911501 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-02 13:37:24.911512 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-02 13:37:24.911522 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.911533 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-02 13:37:24.911544 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-02 13:37:24.911554 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.911564 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-02 13:37:24.911580 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-06-02 13:37:24.911591 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.911602 | orchestrator | 2025-06-02 13:37:24.911615 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-02 13:37:24.911633 | orchestrator | 2025-06-02 13:37:24.911653 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-02 13:37:24.911671 | orchestrator | Monday 02 June 2025 13:37:21 +0000 
(0:00:00.739) 0:08:47.945 *********** 2025-06-02 13:37:24.911690 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.911710 | orchestrator | 2025-06-02 13:37:24.911729 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-02 13:37:24.911747 | orchestrator | 2025-06-02 13:37:24.911763 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-02 13:37:24.911774 | orchestrator | Monday 02 June 2025 13:37:22 +0000 (0:00:00.627) 0:08:48.573 *********** 2025-06-02 13:37:24.911784 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:24.911795 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:24.911806 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:24.911816 | orchestrator | 2025-06-02 13:37:24.911827 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:37:24.911838 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:37:24.911849 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-02 13:37:24.911868 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 13:37:24.911879 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-02 13:37:24.911890 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 13:37:24.911901 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 13:37:24.911912 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-02 13:37:24.911922 | orchestrator | 2025-06-02 13:37:24.911933 | orchestrator | 2025-06-02 
13:37:24.911944 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:37:24.911955 | orchestrator | Monday 02 June 2025 13:37:22 +0000 (0:00:00.430) 0:08:49.004 *********** 2025-06-02 13:37:24.911966 | orchestrator | =============================================================================== 2025-06-02 13:37:24.911984 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.81s 2025-06-02 13:37:24.911995 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.50s 2025-06-02 13:37:24.912005 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.96s 2025-06-02 13:37:24.912016 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.08s 2025-06-02 13:37:24.912027 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.47s 2025-06-02 13:37:24.912038 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.29s 2025-06-02 13:37:24.912048 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.22s 2025-06-02 13:37:24.912059 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.73s 2025-06-02 13:37:24.912069 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.08s 2025-06-02 13:37:24.912080 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.80s 2025-06-02 13:37:24.912090 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 11.65s 2025-06-02 13:37:24.912104 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.28s 2025-06-02 13:37:24.912122 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.24s 2025-06-02 13:37:24.912139 | 
orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.85s 2025-06-02 13:37:24.912158 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.81s 2025-06-02 13:37:24.912173 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.53s 2025-06-02 13:37:24.912184 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.19s 2025-06-02 13:37:24.912195 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.89s 2025-06-02 13:37:24.912206 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.86s 2025-06-02 13:37:24.912217 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.78s 2025-06-02 13:37:24.912227 | orchestrator | 2025-06-02 13:37:24 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:24.912238 | orchestrator | 2025-06-02 13:37:24 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:24.912255 | orchestrator | 2025-06-02 13:37:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:27.943113 | orchestrator | 2025-06-02 13:37:27 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:27.944839 | orchestrator | 2025-06-02 13:37:27 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:27.944874 | orchestrator | 2025-06-02 13:37:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 13:37:30.992847 | orchestrator | 2025-06-02 13:37:30 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED 2025-06-02 13:37:30.994013 | orchestrator | 2025-06-02 13:37:30 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:30.994089 | orchestrator | 2025-06-02 13:37:30 | INFO  | Wait 1 second(s) until the next check 
2025-06-02 13:37:34.035669 | orchestrator | 2025-06-02 13:37:34 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED
2025-06-02 13:37:34.038129 | orchestrator | 2025-06-02 13:37:34 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED
2025-06-02 13:37:34.038162 | orchestrator | 2025-06-02 13:37:34 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:37:37.100497 | orchestrator | 2025-06-02 13:37:37 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state STARTED
2025-06-02 13:37:37.102179 | orchestrator | 2025-06-02 13:37:37 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED
2025-06-02 13:37:37.102213 | orchestrator | 2025-06-02 13:37:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:37:40.148397 | orchestrator | 2025-06-02 13:37:40 | INFO  | Task 65bf26fe-d7e7-4feb-a301-9baa37002bc5 is in state SUCCESS
2025-06-02 13:37:40.149393 | orchestrator |
2025-06-02 13:37:40.149517 | orchestrator |
2025-06-02 13:37:40.149532 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 13:37:40.149544 | orchestrator |
2025-06-02 13:37:40.149555 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 13:37:40.149566 | orchestrator | Monday 02 June 2025 13:35:17 +0000 (0:00:00.346) 0:00:00.346 ***********
2025-06-02 13:37:40.149577 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:37:40.149590 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:37:40.149600 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:37:40.149611 | orchestrator |
2025-06-02 13:37:40.149622 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 13:37:40.149633 | orchestrator | Monday 02 June 2025 13:35:17 +0000 (0:00:00.545) 0:00:00.892 ***********
2025-06-02 13:37:40.149644 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-02 13:37:40.149656 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-02 13:37:40.149667 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-02 13:37:40.149677 | orchestrator |
2025-06-02 13:37:40.149688 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-02 13:37:40.149699 | orchestrator |
2025-06-02 13:37:40.149710 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 13:37:40.150211 | orchestrator | Monday 02 June 2025 13:35:18 +0000 (0:00:00.694) 0:00:01.586 ***********
2025-06-02 13:37:40.150232 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:37:40.150244 | orchestrator |
2025-06-02 13:37:40.150255 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-02 13:37:40.150266 | orchestrator | Monday 02 June 2025 13:35:19 +0000 (0:00:00.604) 0:00:02.191 ***********
2025-06-02 13:37:40.150281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150695 | orchestrator |
2025-06-02 13:37:40.150707 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-02 13:37:40.150718 | orchestrator | Monday 02 June 2025 13:35:19 +0000 (0:00:00.733) 0:00:02.925 ***********
2025-06-02 13:37:40.150728 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-02 13:37:40.150739 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-02 13:37:40.150750 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:37:40.150761 | orchestrator |
2025-06-02 13:37:40.150772 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 13:37:40.150782 | orchestrator | Monday 02 June 2025 13:35:20 +0000 (0:00:00.907) 0:00:03.832 ***********
2025-06-02 13:37:40.150793 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:37:40.150804 | orchestrator |
2025-06-02 13:37:40.150814 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-02 13:37:40.150825 | orchestrator | Monday 02 June 2025 13:35:21 +0000 (0:00:00.745) 0:00:04.577 ***********
2025-06-02 13:37:40.150878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150947 | orchestrator |
2025-06-02 13:37:40.150958 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-02 13:37:40.150969 | orchestrator | Monday 02 June 2025 13:35:22 +0000 (0:00:01.460) 0:00:06.038 ***********
2025-06-02 13:37:40.150981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.150992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151004 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:40.151015 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:40.151100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151116 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:40.151127 | orchestrator |
2025-06-02 13:37:40.151137 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-02 13:37:40.151148 | orchestrator | Monday 02 June 2025 13:35:23 +0000 (0:00:00.381) 0:00:06.419 ***********
2025-06-02 13:37:40.151159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151178 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:40.151189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151200 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:40.151211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151226 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:40.151237 | orchestrator |
2025-06-02 13:37:40.151248 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-02 13:37:40.151259 | orchestrator | Monday 02 June 2025 13:35:24 +0000 (0:00:00.840) 0:00:07.260 ***********
2025-06-02 13:37:40.151272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151399 | orchestrator |
2025-06-02 13:37:40.151411 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-02 13:37:40.151423 | orchestrator | Monday 02 June 2025 13:35:25 +0000 (0:00:01.284) 0:00:08.545 ***********
2025-06-02 13:37:40.151435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.151479 | orchestrator |
2025-06-02 13:37:40.151490 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-02 13:37:40.151502 | orchestrator | Monday 02 June 2025 13:35:26 +0000 (0:00:01.420) 0:00:09.965 ***********
2025-06-02 13:37:40.151514 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:40.151527 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:40.151539 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:40.151550 | orchestrator |
2025-06-02 13:37:40.151562 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-02 13:37:40.151574 | orchestrator | Monday 02 June 2025 13:35:27 +0000 (0:00:00.541) 0:00:10.507 ***********
2025-06-02 13:37:40.151586 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 13:37:40.151599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 13:37:40.151611 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 13:37:40.151623 | orchestrator |
2025-06-02 13:37:40.151633 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-02 13:37:40.151644 | orchestrator | Monday 02 June 2025 13:35:28 +0000 (0:00:01.219) 0:00:11.726 ***********
2025-06-02 13:37:40.151661 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 13:37:40.151705 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 13:37:40.151718 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 13:37:40.151729 | orchestrator |
2025-06-02 13:37:40.151739 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-02 13:37:40.151750 | orchestrator | Monday 02 June 2025 13:35:29 +0000 (0:00:01.269) 0:00:12.995 ***********
2025-06-02 13:37:40.151760 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 13:37:40.151771 | orchestrator |
2025-06-02 13:37:40.151782 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-02 13:37:40.151792 | orchestrator | Monday 02 June 2025 13:35:30 +0000 (0:00:00.745) 0:00:13.740 ***********
2025-06-02 13:37:40.151873 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-02 13:37:40.151886 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-02 13:37:40.151896 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:37:40.151907 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:37:40.151917 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:37:40.151928 | orchestrator |
2025-06-02 13:37:40.151939 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-02 13:37:40.151949 | orchestrator | Monday 02 June 2025 13:35:31 +0000 (0:00:00.674) 0:00:14.415 ***********
2025-06-02 13:37:40.151960 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:37:40.151971 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:40.151981 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:40.151992 | orchestrator |
2025-06-02 13:37:40.152002 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-02 13:37:40.152013 | orchestrator | Monday 02 June 2025 13:35:31 +0000 (0:00:00.563) 0:00:14.979 ***********
2025-06-02 13:37:40.152024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1061247, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1061247, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1061247, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1061231, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1061231, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1061231, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1061220, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9208958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1061220, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9208958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1061220, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9208958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1061241, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1061241, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1061241, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1061206, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1061206, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1061206, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1061223, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9218957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1061223, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9218957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1061223, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9218957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1061238, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1061238, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1061238, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9248958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1061205, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1061205, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1061205, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1061191, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1061191, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1061191, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9118958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1061207, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.152616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk':
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1061207, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1061207, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9178958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1061200, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1061200, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1061200, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1061236, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9238958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1061236, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9238958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1061236, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9238958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1061209, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9188957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1061209, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9188957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1061209, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9188957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1061244, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.925896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152814 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1061244, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.925896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1061244, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.925896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1061204, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152857 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1061204, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1061204, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9168959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1061228, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152901 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1061228, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1061228, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.922896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1061192, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9148958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 13:37:40.152942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1061192, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9148958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1061192, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9148958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1061202, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1061202, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.152997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1061202, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9158957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1061215, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.919896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1061215, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.919896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1061215, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.919896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061295, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9528964, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061295, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9528964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1061295, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9528964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061283, 'dev': 117, 'nlink': 1, 'atime': 
1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061283, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1061283, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 9645, 'inode': 1061251, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1061251, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1061251, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.927896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1061323, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9578965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1061323, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9578965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1061323, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9578965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153225 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1061254, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1061254, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1061254, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153269 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061319, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061319, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1061319, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061325, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9598963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061325, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9598963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1061325, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9598963, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1061309, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9538963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1061309, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9538963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 70691, 'inode': 1061309, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9538963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061314, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061314, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1061314, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9558964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1061256, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1061256, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1061256, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.928896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061289, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061289, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1061289, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9428961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061331, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9608965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061331, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9608965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153617 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1061331, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9608965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061321, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9568963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061321, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9568963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153657 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1061321, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9568963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1061259, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.930896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1061259, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.930896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1061259, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.930896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1061258, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9298959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1061258, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9298959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1061258, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9298959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1061265, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.932896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1061265, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.932896, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1061265, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.932896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1061268, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9388962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1061268, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 
1748868669.9388962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1061268, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9388962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1061291, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1061291, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1061291, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1061313, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9548965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1061313, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9548965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1061313, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9548965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061293, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061293, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1061293, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.943896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.153967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061338, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9628963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 13:37:40.154004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061338, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9628963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.154087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1061338, 'dev': 117, 'nlink': 1, 'atime': 1748724126.0, 'mtime': 1748724126.0, 'ctime': 1748868669.9628963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 13:37:40.154103 | orchestrator |
2025-06-02 13:37:40.154115 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-06-02 13:37:40.154126 | orchestrator | Monday 02 June 2025 13:36:08 +0000 (0:00:36.231) 0:00:51.210 ***********
2025-06-02 13:37:40.154137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.154155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.154167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 13:37:40.154185 | orchestrator |
2025-06-02 13:37:40.154196 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-06-02 13:37:40.154207 | orchestrator | Monday 02 June 2025 13:36:09 +0000 (0:00:00.989) 0:00:52.200 ***********
2025-06-02 13:37:40.154218 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:37:40.154228 | orchestrator |
2025-06-02 13:37:40.154239 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-02 13:37:40.154257 | orchestrator | Monday 02 June 2025 13:36:11 +0000 (0:00:02.157) 0:00:54.357 ***********
2025-06-02 13:37:40.154268 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:37:40.154278 | orchestrator |
2025-06-02 13:37:40.154316 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 13:37:40.154327 | orchestrator | Monday 02 June 2025 13:36:13 +0000 (0:00:02.192) 0:00:56.550 ***********
2025-06-02 13:37:40.154338 | orchestrator |
2025-06-02 13:37:40.154349 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 13:37:40.154359 | orchestrator | Monday 02 June 2025 13:36:13 +0000 (0:00:00.257) 0:00:56.808 ***********
2025-06-02 13:37:40.154370 | orchestrator |
2025-06-02 13:37:40.154380 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-02 13:37:40.154391 | orchestrator | Monday 02 June 2025 13:36:13 +0000 (0:00:00.063) 0:00:56.871 ***********
2025-06-02 13:37:40.154401 | orchestrator |
2025-06-02 13:37:40.154412 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-02 13:37:40.154423 | orchestrator | Monday 02 June 2025 13:36:13 +0000 (0:00:00.063) 0:00:56.935 ***********
2025-06-02 13:37:40.154433 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:37:40.154444 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:37:40.154454 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:37:40.154465 | orchestrator |
2025-06-02 13:37:40.154475 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-02 13:37:40.154486 | orchestrator | Monday 02 June 2025 13:36:15 +0000 (0:00:01.781)
0:00:58.716 *********** 2025-06-02 13:37:40.154497 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:40.154508 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:40.154519 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-02 13:37:40.154529 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-02 13:37:40.154540 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-02 13:37:40.154551 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2025-06-02 13:37:40.154561 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:40.154572 | orchestrator | 2025-06-02 13:37:40.154583 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-02 13:37:40.154593 | orchestrator | Monday 02 June 2025 13:37:05 +0000 (0:00:50.077) 0:01:48.794 *********** 2025-06-02 13:37:40.154604 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:40.154614 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:37:40.154625 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:37:40.154635 | orchestrator | 2025-06-02 13:37:40.154646 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-02 13:37:40.154657 | orchestrator | Monday 02 June 2025 13:37:33 +0000 (0:00:27.926) 0:02:16.720 *********** 2025-06-02 13:37:40.154667 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:37:40.154678 | orchestrator | 2025-06-02 13:37:40.154689 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-02 13:37:40.154699 | orchestrator | Monday 02 June 2025 13:37:36 +0000 (0:00:02.380) 0:02:19.101 *********** 2025-06-02 13:37:40.154710 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 13:37:40.154727 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:37:40.154738 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:37:40.154749 | orchestrator | 2025-06-02 13:37:40.154759 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-02 13:37:40.154770 | orchestrator | Monday 02 June 2025 13:37:36 +0000 (0:00:00.313) 0:02:19.414 *********** 2025-06-02 13:37:40.154787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-02 13:37:40.154799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-02 13:37:40.154811 | orchestrator | 2025-06-02 13:37:40.154822 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-02 13:37:40.154832 | orchestrator | Monday 02 June 2025 13:37:38 +0000 (0:00:02.350) 0:02:21.765 *********** 2025-06-02 13:37:40.154843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:37:40.154854 | orchestrator | 2025-06-02 13:37:40.154864 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:37:40.154876 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 13:37:40.154887 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 13:37:40.154898 | 
orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 13:37:40.154908 | orchestrator | 2025-06-02 13:37:40.154919 | orchestrator | 2025-06-02 13:37:40.154929 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:37:40.154940 | orchestrator | Monday 02 June 2025 13:37:38 +0000 (0:00:00.246) 0:02:22.011 *********** 2025-06-02 13:37:40.154956 | orchestrator | =============================================================================== 2025-06-02 13:37:40.154967 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.08s 2025-06-02 13:37:40.154978 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.23s 2025-06-02 13:37:40.154988 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.93s 2025-06-02 13:37:40.154999 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.38s 2025-06-02 13:37:40.155010 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.35s 2025-06-02 13:37:40.155020 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.19s 2025-06-02 13:37:40.155031 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.16s 2025-06-02 13:37:40.155041 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.78s 2025-06-02 13:37:40.155052 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.46s 2025-06-02 13:37:40.155062 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.42s 2025-06-02 13:37:40.155073 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.28s 2025-06-02 13:37:40.155083 | orchestrator | grafana : Configuring dashboards 
provisioning --------------------------- 1.27s 2025-06-02 13:37:40.155094 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2025-06-02 13:37:40.155104 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.99s 2025-06-02 13:37:40.155130 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.91s 2025-06-02 13:37:40.155148 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.84s 2025-06-02 13:37:40.155158 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s 2025-06-02 13:37:40.155169 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s 2025-06-02 13:37:40.155180 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.73s 2025-06-02 13:37:40.155190 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s 2025-06-02 13:37:40.155201 | orchestrator | 2025-06-02 13:37:40 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state STARTED 2025-06-02 13:37:40.155212 | orchestrator | 2025-06-02 13:37:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 13:40:00.364008 | orchestrator | 2025-06-02 13:40:00 | INFO  | Task 06914525-0e2a-4848-bdd5-cf1e8e802a52 is in state SUCCESS 2025-06-02 13:40:00.365196 | orchestrator | 2025-06-02 13:40:00.365457 | orchestrator | 2025-06-02 13:40:00.365477 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:40:00.365491 | orchestrator | 2025-06-02 13:40:00.365882 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:40:00.365902 | orchestrator | Monday 02 June 2025 13:35:27 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-06-02 13:40:00.365913 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.365926 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:40:00.365937 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:40:00.365949 | orchestrator | 2025-06-02 13:40:00.365960 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:40:00.365972 | orchestrator | Monday 02 June 2025 13:35:27 +0000 (0:00:00.312) 0:00:00.571 *********** 2025-06-02 13:40:00.365983 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-02 13:40:00.365996 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-02 13:40:00.366008 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-02 13:40:00.366074 | orchestrator | 2025-06-02 13:40:00.366182 | orchestrator | PLAY [Apply role octavia]
****************************************************** 2025-06-02 13:40:00.366494 | orchestrator | 2025-06-02 13:40:00.366511 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 13:40:00.366523 | orchestrator | Monday 02 June 2025 13:35:28 +0000 (0:00:00.472) 0:00:01.043 *********** 2025-06-02 13:40:00.366534 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:40:00.366545 | orchestrator | 2025-06-02 13:40:00.366556 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-02 13:40:00.366567 | orchestrator | Monday 02 June 2025 13:35:28 +0000 (0:00:00.534) 0:00:01.578 *********** 2025-06-02 13:40:00.366578 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-02 13:40:00.366589 | orchestrator | 2025-06-02 13:40:00.366600 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-02 13:40:00.366611 | orchestrator | Monday 02 June 2025 13:35:31 +0000 (0:00:03.266) 0:00:04.844 *********** 2025-06-02 13:40:00.366621 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-02 13:40:00.366632 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-02 13:40:00.366643 | orchestrator | 2025-06-02 13:40:00.366654 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-02 13:40:00.366665 | orchestrator | Monday 02 June 2025 13:35:38 +0000 (0:00:06.455) 0:00:11.300 *********** 2025-06-02 13:40:00.366676 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 13:40:00.366687 | orchestrator | 2025-06-02 13:40:00.366697 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-02 
13:40:00.366708 | orchestrator | Monday 02 June 2025 13:35:41 +0000 (0:00:03.245) 0:00:14.545 *********** 2025-06-02 13:40:00.366748 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 13:40:00.366760 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 13:40:00.366771 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 13:40:00.366782 | orchestrator | 2025-06-02 13:40:00.366792 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-02 13:40:00.366803 | orchestrator | Monday 02 June 2025 13:35:49 +0000 (0:00:08.291) 0:00:22.837 *********** 2025-06-02 13:40:00.366814 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 13:40:00.366825 | orchestrator | 2025-06-02 13:40:00.366836 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-02 13:40:00.366847 | orchestrator | Monday 02 June 2025 13:35:53 +0000 (0:00:03.434) 0:00:26.272 *********** 2025-06-02 13:40:00.366857 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 13:40:00.366868 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 13:40:00.366879 | orchestrator | 2025-06-02 13:40:00.366889 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-02 13:40:00.366900 | orchestrator | Monday 02 June 2025 13:36:00 +0000 (0:00:07.465) 0:00:33.737 *********** 2025-06-02 13:40:00.366910 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-02 13:40:00.366921 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-02 13:40:00.366932 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-02 13:40:00.366942 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-02 13:40:00.366967 | 
orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-02 13:40:00.366978 | orchestrator | 2025-06-02 13:40:00.366989 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 13:40:00.366999 | orchestrator | Monday 02 June 2025 13:36:16 +0000 (0:00:15.524) 0:00:49.262 *********** 2025-06-02 13:40:00.367010 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:40:00.367020 | orchestrator | 2025-06-02 13:40:00.367031 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-02 13:40:00.367042 | orchestrator | Monday 02 June 2025 13:36:16 +0000 (0:00:00.551) 0:00:49.814 *********** 2025-06-02 13:40:00.367052 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367063 | orchestrator | 2025-06-02 13:40:00.367074 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-06-02 13:40:00.367084 | orchestrator | Monday 02 June 2025 13:36:21 +0000 (0:00:05.028) 0:00:54.843 *********** 2025-06-02 13:40:00.367095 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367105 | orchestrator | 2025-06-02 13:40:00.367118 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-02 13:40:00.367177 | orchestrator | Monday 02 June 2025 13:36:25 +0000 (0:00:03.516) 0:00:58.359 *********** 2025-06-02 13:40:00.367192 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.367205 | orchestrator | 2025-06-02 13:40:00.367241 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-06-02 13:40:00.367254 | orchestrator | Monday 02 June 2025 13:36:28 +0000 (0:00:02.799) 0:01:01.159 *********** 2025-06-02 13:40:00.367267 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-02 13:40:00.367279 | orchestrator | 
changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-02 13:40:00.367292 | orchestrator | 2025-06-02 13:40:00.367304 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-06-02 13:40:00.367316 | orchestrator | Monday 02 June 2025 13:36:39 +0000 (0:00:11.522) 0:01:12.682 *********** 2025-06-02 13:40:00.367329 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-06-02 13:40:00.367342 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-06-02 13:40:00.367365 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-06-02 13:40:00.367377 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-06-02 13:40:00.367388 | orchestrator | 2025-06-02 13:40:00.367398 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-06-02 13:40:00.367409 | orchestrator | Monday 02 June 2025 13:36:56 +0000 (0:00:17.135) 0:01:29.817 *********** 2025-06-02 13:40:00.367420 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367430 | orchestrator | 2025-06-02 13:40:00.367441 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-06-02 13:40:00.367451 | orchestrator | Monday 02 June 2025 13:37:01 +0000 (0:00:04.660) 0:01:34.478 *********** 2025-06-02 13:40:00.367462 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367472 | orchestrator | 2025-06-02 13:40:00.367483 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-06-02 13:40:00.367494 | orchestrator | Monday 02 June 2025 
13:37:06 +0000 (0:00:05.428) 0:01:39.907 *********** 2025-06-02 13:40:00.367504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.367515 | orchestrator | 2025-06-02 13:40:00.367525 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-06-02 13:40:00.367536 | orchestrator | Monday 02 June 2025 13:37:07 +0000 (0:00:00.250) 0:01:40.157 *********** 2025-06-02 13:40:00.367547 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367557 | orchestrator | 2025-06-02 13:40:00.367568 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 13:40:00.367578 | orchestrator | Monday 02 June 2025 13:37:11 +0000 (0:00:04.701) 0:01:44.859 *********** 2025-06-02 13:40:00.367589 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:40:00.367600 | orchestrator | 2025-06-02 13:40:00.367611 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-06-02 13:40:00.367621 | orchestrator | Monday 02 June 2025 13:37:13 +0000 (0:00:01.273) 0:01:46.132 *********** 2025-06-02 13:40:00.367631 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.367642 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.367653 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367663 | orchestrator | 2025-06-02 13:40:00.367674 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-06-02 13:40:00.367684 | orchestrator | Monday 02 June 2025 13:37:17 +0000 (0:00:04.762) 0:01:50.895 *********** 2025-06-02 13:40:00.367695 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367705 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.367716 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.367726 | orchestrator | 2025-06-02 13:40:00.367737 | orchestrator | 
TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-06-02 13:40:00.367748 | orchestrator | Monday 02 June 2025 13:37:22 +0000 (0:00:04.273) 0:01:55.169 *********** 2025-06-02 13:40:00.367758 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367769 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.367779 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.367790 | orchestrator | 2025-06-02 13:40:00.367800 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-06-02 13:40:00.367811 | orchestrator | Monday 02 June 2025 13:37:23 +0000 (0:00:00.805) 0:01:55.974 *********** 2025-06-02 13:40:00.367821 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:40:00.367838 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:40:00.367850 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.367861 | orchestrator | 2025-06-02 13:40:00.367872 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-06-02 13:40:00.367889 | orchestrator | Monday 02 June 2025 13:37:24 +0000 (0:00:01.861) 0:01:57.835 *********** 2025-06-02 13:40:00.367899 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.367910 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.367920 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367931 | orchestrator | 2025-06-02 13:40:00.367941 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-06-02 13:40:00.367952 | orchestrator | Monday 02 June 2025 13:37:26 +0000 (0:00:01.233) 0:01:59.068 *********** 2025-06-02 13:40:00.367963 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.367973 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.367984 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.367994 | orchestrator | 2025-06-02 13:40:00.368005 | orchestrator | TASK [octavia : Restart 
octavia-interface.service if required] ***************** 2025-06-02 13:40:00.368015 | orchestrator | Monday 02 June 2025 13:37:27 +0000 (0:00:01.220) 0:02:00.289 *********** 2025-06-02 13:40:00.368026 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.368037 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.368047 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.368058 | orchestrator | 2025-06-02 13:40:00.368103 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-06-02 13:40:00.368116 | orchestrator | Monday 02 June 2025 13:37:29 +0000 (0:00:01.943) 0:02:02.232 *********** 2025-06-02 13:40:00.368127 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.368137 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.368148 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.368158 | orchestrator | 2025-06-02 13:40:00.368169 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-06-02 13:40:00.368180 | orchestrator | Monday 02 June 2025 13:37:31 +0000 (0:00:01.734) 0:02:03.967 *********** 2025-06-02 13:40:00.368190 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368201 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:40:00.368257 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:40:00.368269 | orchestrator | 2025-06-02 13:40:00.368280 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-06-02 13:40:00.368291 | orchestrator | Monday 02 June 2025 13:37:31 +0000 (0:00:00.644) 0:02:04.611 *********** 2025-06-02 13:40:00.368302 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:40:00.368312 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368323 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:40:00.368334 | orchestrator | 2025-06-02 13:40:00.368344 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2025-06-02 13:40:00.368355 | orchestrator | Monday 02 June 2025 13:37:34 +0000 (0:00:02.977) 0:02:07.589 *********** 2025-06-02 13:40:00.368366 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:40:00.368377 | orchestrator | 2025-06-02 13:40:00.368388 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-02 13:40:00.368398 | orchestrator | Monday 02 June 2025 13:37:35 +0000 (0:00:00.780) 0:02:08.370 *********** 2025-06-02 13:40:00.368409 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368420 | orchestrator | 2025-06-02 13:40:00.368431 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-02 13:40:00.368442 | orchestrator | Monday 02 June 2025 13:37:39 +0000 (0:00:03.622) 0:02:11.992 *********** 2025-06-02 13:40:00.368452 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368463 | orchestrator | 2025-06-02 13:40:00.368474 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-02 13:40:00.368485 | orchestrator | Monday 02 June 2025 13:37:42 +0000 (0:00:03.298) 0:02:15.291 *********** 2025-06-02 13:40:00.368495 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-02 13:40:00.368506 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-02 13:40:00.368517 | orchestrator | 2025-06-02 13:40:00.368528 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-06-02 13:40:00.368547 | orchestrator | Monday 02 June 2025 13:37:48 +0000 (0:00:06.556) 0:02:21.847 *********** 2025-06-02 13:40:00.368558 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368569 | orchestrator | 2025-06-02 13:40:00.368579 | orchestrator | TASK [octavia : Set octavia resources facts] 
*********************************** 2025-06-02 13:40:00.368590 | orchestrator | Monday 02 June 2025 13:37:52 +0000 (0:00:03.241) 0:02:25.088 *********** 2025-06-02 13:40:00.368601 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:40:00.368612 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:40:00.368622 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:40:00.368633 | orchestrator | 2025-06-02 13:40:00.368644 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-02 13:40:00.368655 | orchestrator | Monday 02 June 2025 13:37:52 +0000 (0:00:00.312) 0:02:25.401 *********** 2025-06-02 13:40:00.368674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.368726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.368741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.368754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.368774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.368785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.368802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368814 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368883 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368929 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.368983 | orchestrator | 2025-06-02 13:40:00.368994 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-02 13:40:00.369005 | orchestrator | Monday 02 June 2025 13:37:55 +0000 (0:00:02.572) 0:02:27.974 *********** 2025-06-02 13:40:00.369016 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.369027 | orchestrator | 2025-06-02 13:40:00.369038 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-02 13:40:00.369049 | orchestrator | Monday 02 June 2025 13:37:55 +0000 (0:00:00.332) 0:02:28.306 *********** 2025-06-02 13:40:00.369060 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.369071 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:40:00.369082 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 13:40:00.369093 | orchestrator | 2025-06-02 13:40:00.369104 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-02 13:40:00.369114 | orchestrator | Monday 02 June 2025 13:37:55 +0000 (0:00:00.290) 0:02:28.597 *********** 2025-06-02 13:40:00.369134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.369146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.369157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.369197 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.369265 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.369287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.369298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.369344 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:40:00.369386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.369400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.369419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.369453 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:40:00.369464 | orchestrator | 2025-06-02 13:40:00.369475 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 13:40:00.369486 | orchestrator | Monday 02 June 2025 13:37:56 +0000 (0:00:00.647) 0:02:29.245 *********** 2025-06-02 13:40:00.369496 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 13:40:00.369507 | orchestrator | 2025-06-02 13:40:00.369518 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-02 13:40:00.369528 | orchestrator | Monday 02 June 2025 13:37:56 +0000 (0:00:00.514) 0:02:29.760 *********** 2025-06-02 13:40:00.369544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.369596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.369611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.369623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.369634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.369651 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 13:40:00.369663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 13:40:00.369801 | orchestrator | 2025-06-02 13:40:00.369817 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-02 13:40:00.369830 | orchestrator | Monday 02 June 2025 13:38:01 +0000 (0:00:04.896) 0:02:34.656 *********** 2025-06-02 13:40:00.369850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.369869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.369888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.369972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.369993 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.370012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.370076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.370088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.370136 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:40:00.370158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.370170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.370181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.370248 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:40:00.370260 | orchestrator | 2025-06-02 13:40:00.370271 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-02 13:40:00.370290 | orchestrator | Monday 02 June 2025 13:38:02 +0000 (0:00:00.641) 0:02:35.297 *********** 2025-06-02 13:40:00.370314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.370344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.370363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-06-02 13:40:00.370383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.370416 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:40:00.370438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.370473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.370503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.370556 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:40:00.370574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 13:40:00.370602 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 13:40:00.370626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 13:40:00.370674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 13:40:00.370692 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:40:00.370710 | orchestrator | 2025-06-02 13:40:00.370729 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-02 13:40:00.370748 | orchestrator | Monday 02 June 2025 13:38:03 +0000 (0:00:00.831) 0:02:36.129 *********** 2025-06-02 13:40:00.370766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 13:40:00.370788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.370830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.370852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.370865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.370876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.370888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.370978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.370990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.371007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.371018 | orchestrator | 
2025-06-02 13:40:00.371029 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-06-02 13:40:00.371040 | orchestrator | Monday 02 June 2025 13:38:08 +0000 (0:00:05.149) 0:02:41.278 ***********
2025-06-02 13:40:00.371051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 13:40:00.371067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 13:40:00.371079 | orchestrator | changed: [testbed-node-2]
=> (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 13:40:00.371089 | orchestrator | 
2025-06-02 13:40:00.371100 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-06-02 13:40:00.371111 | orchestrator | Monday 02 June 2025 13:38:09 +0000 (0:00:01.544) 0:02:42.823 ***********
2025-06-02 13:40:00.371128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.371140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.371152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.371169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.371186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.371197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.371276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.371361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.371382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes':
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.371394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.371411 | orchestrator | 
2025-06-02 13:40:00.371422 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2025-06-02 13:40:00.371433 | orchestrator | Monday 02 June 2025 13:38:25 +0000 (0:00:16.087) 0:02:58.910 ***********
2025-06-02 13:40:00.371444 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:40:00.371455 | orchestrator | changed: [testbed-node-1]
2025-06-02 13:40:00.371466 | orchestrator | changed: [testbed-node-2]
2025-06-02 13:40:00.371477 | orchestrator | 
2025-06-02 13:40:00.371487 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2025-06-02 13:40:00.371498 | orchestrator | Monday 02 June 2025 13:38:27 +0000 (0:00:01.430) 0:03:00.341 ***********
2025-06-02 13:40:00.371509 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371520 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371530 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371541 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371552 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371562 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371573 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371583 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371594 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371604 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371615 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371625 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371636 | orchestrator | 
2025-06-02 13:40:00.371646 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2025-06-02 13:40:00.371657 | orchestrator | Monday 02 June 2025 13:38:32 +0000 (0:00:05.361) 0:03:05.702 ***********
2025-06-02 13:40:00.371667 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371678 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371688 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371699 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371709 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371720 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371731 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371741 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371752 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371767 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371777 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371786 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371796 | orchestrator | 
2025-06-02 13:40:00.371805 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2025-06-02 13:40:00.371815 | orchestrator | Monday 02 June 2025 13:38:37 +0000 (0:00:04.779) 0:03:10.481 ***********
2025-06-02 13:40:00.371824 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371834 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371843 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-06-02 13:40:00.371853 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371862 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371871 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-06-02 13:40:00.371891 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371906 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371916 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-06-02 13:40:00.371926 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371935 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371945 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-06-02 13:40:00.371954 | orchestrator | 
2025-06-02 
13:40:00.371963 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2025-06-02 13:40:00.371973 | orchestrator | Monday 02 June 2025 13:38:42 +0000 (0:00:05.116) 0:03:15.598 ***********
2025-06-02 13:40:00.371983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.371993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.372008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 13:40:00.372018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.372041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.372051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 13:40:00.372061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 13:40:00.372139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.372149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.372160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 13:40:00.372169 | orchestrator | 
2025-06-02 13:40:00.372179 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-02 13:40:00.372189 | orchestrator | Monday 02 June 2025 13:38:46 +0000 (0:00:03.377) 0:03:18.976 ***********
2025-06-02 13:40:00.372198 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:40:00.372224 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:40:00.372235 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:40:00.372244 | orchestrator | 
2025-06-02 13:40:00.372254 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-06-02 13:40:00.372263 | orchestrator | Monday 02 June 2025 13:38:46 +0000 (0:00:00.306) 0:03:19.282 ***********
2025-06-02 13:40:00.372273 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:40:00.372282 | orchestrator | 
2025-06-02 13:40:00.372292 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-06-02 13:40:00.372301 | orchestrator | Monday 02 June 2025 13:38:48 +0000 (0:00:01.920) 0:03:21.203 ***********
2025-06-02 
13:40:00.372318 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372327 | orchestrator | 2025-06-02 13:40:00.372337 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-02 13:40:00.372346 | orchestrator | Monday 02 June 2025 13:38:50 +0000 (0:00:02.490) 0:03:23.694 *********** 2025-06-02 13:40:00.372356 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372365 | orchestrator | 2025-06-02 13:40:00.372375 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-02 13:40:00.372388 | orchestrator | Monday 02 June 2025 13:38:52 +0000 (0:00:02.088) 0:03:25.782 *********** 2025-06-02 13:40:00.372398 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372408 | orchestrator | 2025-06-02 13:40:00.372417 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-02 13:40:00.372427 | orchestrator | Monday 02 June 2025 13:38:54 +0000 (0:00:02.015) 0:03:27.797 *********** 2025-06-02 13:40:00.372436 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372445 | orchestrator | 2025-06-02 13:40:00.372455 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 13:40:00.372464 | orchestrator | Monday 02 June 2025 13:39:14 +0000 (0:00:19.324) 0:03:47.121 *********** 2025-06-02 13:40:00.372473 | orchestrator | 2025-06-02 13:40:00.372483 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 13:40:00.372492 | orchestrator | Monday 02 June 2025 13:39:14 +0000 (0:00:00.069) 0:03:47.191 *********** 2025-06-02 13:40:00.372502 | orchestrator | 2025-06-02 13:40:00.372512 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 13:40:00.372521 | orchestrator | Monday 02 June 2025 13:39:14 +0000 (0:00:00.063) 0:03:47.254 *********** 
2025-06-02 13:40:00.372530 | orchestrator | 2025-06-02 13:40:00.372540 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-02 13:40:00.372555 | orchestrator | Monday 02 June 2025 13:39:14 +0000 (0:00:00.071) 0:03:47.325 *********** 2025-06-02 13:40:00.372565 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372575 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.372585 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.372594 | orchestrator | 2025-06-02 13:40:00.372604 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-02 13:40:00.372613 | orchestrator | Monday 02 June 2025 13:39:25 +0000 (0:00:11.190) 0:03:58.515 *********** 2025-06-02 13:40:00.372623 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372632 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.372641 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.372651 | orchestrator | 2025-06-02 13:40:00.372660 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-02 13:40:00.372670 | orchestrator | Monday 02 June 2025 13:39:31 +0000 (0:00:06.381) 0:04:04.897 *********** 2025-06-02 13:40:00.372680 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372689 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.372699 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.372708 | orchestrator | 2025-06-02 13:40:00.372718 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-02 13:40:00.372727 | orchestrator | Monday 02 June 2025 13:39:37 +0000 (0:00:05.472) 0:04:10.369 *********** 2025-06-02 13:40:00.372737 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372747 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.372756 | orchestrator | changed: [testbed-node-2] 2025-06-02 
13:40:00.372766 | orchestrator | 2025-06-02 13:40:00.372775 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-02 13:40:00.372785 | orchestrator | Monday 02 June 2025 13:39:47 +0000 (0:00:09.863) 0:04:20.232 *********** 2025-06-02 13:40:00.372794 | orchestrator | changed: [testbed-node-0] 2025-06-02 13:40:00.372803 | orchestrator | changed: [testbed-node-2] 2025-06-02 13:40:00.372813 | orchestrator | changed: [testbed-node-1] 2025-06-02 13:40:00.372822 | orchestrator | 2025-06-02 13:40:00.372832 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:40:00.372848 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 13:40:00.372858 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:40:00.372868 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 13:40:00.372877 | orchestrator | 2025-06-02 13:40:00.372887 | orchestrator | 2025-06-02 13:40:00.372896 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:40:00.372906 | orchestrator | Monday 02 June 2025 13:39:57 +0000 (0:00:10.347) 0:04:30.580 *********** 2025-06-02 13:40:00.372916 | orchestrator | =============================================================================== 2025-06-02 13:40:00.372925 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.32s 2025-06-02 13:40:00.372935 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.14s 2025-06-02 13:40:00.372944 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.09s 2025-06-02 13:40:00.372954 | orchestrator | octavia : Adding octavia related roles --------------------------------- 
15.52s 2025-06-02 13:40:00.372963 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.52s 2025-06-02 13:40:00.372973 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.19s 2025-06-02 13:40:00.372982 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.35s 2025-06-02 13:40:00.372992 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.86s 2025-06-02 13:40:00.373001 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.29s 2025-06-02 13:40:00.373010 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.47s 2025-06-02 13:40:00.373020 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.56s 2025-06-02 13:40:00.373029 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.46s 2025-06-02 13:40:00.373038 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.38s 2025-06-02 13:40:00.373048 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.47s 2025-06-02 13:40:00.373061 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.43s 2025-06-02 13:40:00.373071 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.36s 2025-06-02 13:40:00.373080 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.15s 2025-06-02 13:40:00.373090 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.12s 2025-06-02 13:40:00.373100 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.03s 2025-06-02 13:40:00.373109 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.90s 
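The PLAY RECAP block above uses a fixed `host : ok=N changed=N ...` layout, so per-host results can be pulled out of captured job logs with a short script. A minimal sketch — the sample line mirrors the recap above, and the surrounding Zuul timestamp/host prefix is assumed to be stripped beforehand:

```python
import re

# One recap entry per host: "<host> : ok=57 changed=39 unreachable=0 failed=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str):
    """Return (host, stats_dict) for a PLAY RECAP line, or None if it does not match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    stats = {k: int(v) for k, v in
             (pair.split("=") for pair in m.group("stats").split())}
    return m.group("host"), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
assert host == "testbed-node-0" and stats["changed"] == 39 and stats["failed"] == 0
```

This makes it easy to flag builds where any host reports `failed` or `unreachable` greater than zero without re-reading the whole console log.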
2025-06-02 13:40:00.373118 | orchestrator | 2025-06-02 13:40:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:03.410928 | orchestrator | 2025-06-02 13:40:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:06.458382 | orchestrator | 2025-06-02 13:40:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:09.501239 | orchestrator | 2025-06-02 13:40:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:12.538756 | orchestrator | 2025-06-02 13:40:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:15.585843 | orchestrator | 2025-06-02 13:40:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:18.633507 | orchestrator | 2025-06-02 13:40:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:21.676662 | orchestrator | 2025-06-02 13:40:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:24.719709 | orchestrator | 2025-06-02 13:40:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:27.760683 | orchestrator | 2025-06-02 13:40:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:30.799555 | orchestrator | 2025-06-02 13:40:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:33.847145 | orchestrator | 2025-06-02 13:40:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:36.882705 | orchestrator | 2025-06-02 13:40:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:39.928706 | orchestrator | 2025-06-02 13:40:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:42.965651 | orchestrator | 2025-06-02 13:40:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:46.006312 | orchestrator | 2025-06-02 13:40:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:49.048356 | orchestrator | 
2025-06-02 13:40:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:52.093356 | orchestrator | 2025-06-02 13:40:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:55.135680 | orchestrator | 2025-06-02 13:40:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:40:58.178948 | orchestrator | 2025-06-02 13:40:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 13:41:01.223541 | orchestrator | 2025-06-02 13:41:01.512609 | orchestrator | 2025-06-02 13:41:01.516508 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 13:41:01 UTC 2025 2025-06-02 13:41:01.516545 | orchestrator | 2025-06-02 13:41:01.968090 | orchestrator | ok: Runtime: 0:32:50.971212 2025-06-02 13:41:02.252413 | 2025-06-02 13:41:02.252554 | TASK [Bootstrap services] 2025-06-02 13:41:03.064335 | orchestrator | 2025-06-02 13:41:03.064523 | orchestrator | # BOOTSTRAP 2025-06-02 13:41:03.064548 | orchestrator | 2025-06-02 13:41:03.064562 | orchestrator | + set -e 2025-06-02 13:41:03.064576 | orchestrator | + echo 2025-06-02 13:41:03.064590 | orchestrator | + echo '# BOOTSTRAP' 2025-06-02 13:41:03.064608 | orchestrator | + echo 2025-06-02 13:41:03.064653 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 13:41:03.069561 | orchestrator | + set -e 2025-06-02 13:41:03.069598 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 13:41:05.127812 | orchestrator | 2025-06-02 13:41:05 | INFO  | It takes a moment until task afc6f853-7aea-4659-ad57-b2cbd6f16cdb (flavor-manager) has been started and output is visible here. 
2025-06-02 13:41:08.967716 | orchestrator | 2025-06-02 13:41:08 | INFO  | Flavor SCS-1V-4 created 2025-06-02 13:41:09.188789 | orchestrator | 2025-06-02 13:41:09 | INFO  | Flavor SCS-2V-8 created 2025-06-02 13:41:09.389941 | orchestrator | 2025-06-02 13:41:09 | INFO  | Flavor SCS-4V-16 created 2025-06-02 13:41:09.579802 | orchestrator | 2025-06-02 13:41:09 | INFO  | Flavor SCS-8V-32 created 2025-06-02 13:41:09.729020 | orchestrator | 2025-06-02 13:41:09 | INFO  | Flavor SCS-1V-2 created 2025-06-02 13:41:09.868421 | orchestrator | 2025-06-02 13:41:09 | INFO  | Flavor SCS-2V-4 created 2025-06-02 13:41:10.008592 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-4V-8 created 2025-06-02 13:41:10.142347 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-8V-16 created 2025-06-02 13:41:10.284008 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-16V-32 created 2025-06-02 13:41:10.435718 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-1V-8 created 2025-06-02 13:41:10.582417 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-2V-16 created 2025-06-02 13:41:10.700620 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-4V-32 created 2025-06-02 13:41:10.833868 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-1L-1 created 2025-06-02 13:41:10.961463 | orchestrator | 2025-06-02 13:41:10 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 13:41:11.112637 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-4V-16-100s created 2025-06-02 13:41:11.246660 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 13:41:11.375750 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 13:41:11.517995 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 13:41:11.670348 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-8V-32-100 created 2025-06-02 13:41:11.781074 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-1V-2-5 created 
2025-06-02 13:41:11.929885 | orchestrator | 2025-06-02 13:41:11 | INFO  | Flavor SCS-2V-4-10 created 2025-06-02 13:41:12.068959 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-4V-8-20 created 2025-06-02 13:41:12.213246 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-8V-16-50 created 2025-06-02 13:41:12.369426 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-16V-32-100 created 2025-06-02 13:41:12.522079 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-1V-8-20 created 2025-06-02 13:41:12.650675 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-2V-16-50 created 2025-06-02 13:41:12.789516 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-4V-32-100 created 2025-06-02 13:41:12.925092 | orchestrator | 2025-06-02 13:41:12 | INFO  | Flavor SCS-1L-1-5 created 2025-06-02 13:41:15.224329 | orchestrator | 2025-06-02 13:41:15 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-02 13:41:15.228801 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:41:15.229523 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:41:15.229587 | orchestrator | Registering Redlock._release_script 2025-06-02 13:41:15.288649 | orchestrator | 2025-06-02 13:41:15 | INFO  | Task 1f107705-8ebe-4aec-8ce8-08e589b08948 (bootstrap-basic) was prepared for execution. 2025-06-02 13:41:15.288721 | orchestrator | 2025-06-02 13:41:15 | INFO  | It takes a moment until task 1f107705-8ebe-4aec-8ce8-08e589b08948 (bootstrap-basic) has been started and output is visible here. 
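The flavor names created above follow the SCS naming scheme, `SCS-<cpus><class>-<ramGiB>[-<diskGB>[<type>]]` (e.g. `SCS-4V-16-100s`). A small decoder for the fields visible in this log — a sketch under the assumption that the letter after the CPU count (`V`/`L` above) marks the vCPU class and an optional trailing letter (`s` here) marks a disk type, as in the SCS flavor-naming spec:

```python
import re

# Matches names like SCS-1V-4, SCS-2V-4-20s, SCS-1L-1-5 (as created in the log above)
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[A-Z])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>[a-z])?)?$"
)

def decode_flavor(name: str) -> dict:
    """Split an SCS-style flavor name into its component fields."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS-style flavor name: {name}")
    return {
        "cpus": int(m.group("cpus")),
        "cpu_class": m.group("cpu_class"),      # 'V' or 'L' in this log
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else None,
        "disk_type": m.group("disk_type"),      # e.g. 's' in SCS-2V-4-20s
    }

assert decode_flavor("SCS-4V-16-100s")["disk_gb"] == 100
```

Flavors without a disk segment (e.g. `SCS-1V-4`) decode with `disk_gb = None`, matching the convention that such flavors boot from volume.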
2025-06-02 13:41:19.407371 | orchestrator | 2025-06-02 13:41:19.407546 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-02 13:41:19.409950 | orchestrator | 2025-06-02 13:41:19.409976 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 13:41:19.410289 | orchestrator | Monday 02 June 2025 13:41:19 +0000 (0:00:00.073) 0:00:00.073 *********** 2025-06-02 13:41:21.242464 | orchestrator | ok: [localhost] 2025-06-02 13:41:21.242574 | orchestrator | 2025-06-02 13:41:21.242761 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-02 13:41:21.243210 | orchestrator | Monday 02 June 2025 13:41:21 +0000 (0:00:01.835) 0:00:01.908 *********** 2025-06-02 13:41:30.348416 | orchestrator | ok: [localhost] 2025-06-02 13:41:30.349133 | orchestrator | 2025-06-02 13:41:30.349460 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-02 13:41:30.350387 | orchestrator | Monday 02 June 2025 13:41:30 +0000 (0:00:09.107) 0:00:11.016 *********** 2025-06-02 13:41:37.296749 | orchestrator | changed: [localhost] 2025-06-02 13:41:37.296857 | orchestrator | 2025-06-02 13:41:37.297628 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-02 13:41:37.298991 | orchestrator | Monday 02 June 2025 13:41:37 +0000 (0:00:06.947) 0:00:17.963 *********** 2025-06-02 13:41:43.961953 | orchestrator | ok: [localhost] 2025-06-02 13:41:43.962760 | orchestrator | 2025-06-02 13:41:43.963469 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-02 13:41:43.964671 | orchestrator | Monday 02 June 2025 13:41:43 +0000 (0:00:06.666) 0:00:24.630 *********** 2025-06-02 13:41:50.877215 | orchestrator | changed: [localhost] 2025-06-02 13:41:50.878438 | orchestrator | 2025-06-02 13:41:50.879624 | orchestrator | 
TASK [Create public network] *************************************************** 2025-06-02 13:41:50.880959 | orchestrator | Monday 02 June 2025 13:41:50 +0000 (0:00:06.914) 0:00:31.544 *********** 2025-06-02 13:41:55.834505 | orchestrator | changed: [localhost] 2025-06-02 13:41:55.836239 | orchestrator | 2025-06-02 13:41:55.836553 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-02 13:41:55.838732 | orchestrator | Monday 02 June 2025 13:41:55 +0000 (0:00:04.955) 0:00:36.500 *********** 2025-06-02 13:42:01.718269 | orchestrator | changed: [localhost] 2025-06-02 13:42:01.719963 | orchestrator | 2025-06-02 13:42:01.722400 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-02 13:42:01.723367 | orchestrator | Monday 02 June 2025 13:42:01 +0000 (0:00:05.885) 0:00:42.386 *********** 2025-06-02 13:42:06.004065 | orchestrator | changed: [localhost] 2025-06-02 13:42:06.004201 | orchestrator | 2025-06-02 13:42:06.004694 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-02 13:42:06.005771 | orchestrator | Monday 02 June 2025 13:42:05 +0000 (0:00:04.284) 0:00:46.671 *********** 2025-06-02 13:42:09.741647 | orchestrator | changed: [localhost] 2025-06-02 13:42:09.741924 | orchestrator | 2025-06-02 13:42:09.741955 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-02 13:42:09.742288 | orchestrator | Monday 02 June 2025 13:42:09 +0000 (0:00:03.737) 0:00:50.408 *********** 2025-06-02 13:42:13.269825 | orchestrator | ok: [localhost] 2025-06-02 13:42:13.269966 | orchestrator | 2025-06-02 13:42:13.270086 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:42:13.270643 | orchestrator | 2025-06-02 13:42:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 13:42:13.271353 | orchestrator | 2025-06-02 13:42:13 | INFO  | Please wait and do not abort execution. 2025-06-02 13:42:13.272283 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 13:42:13.273611 | orchestrator | 2025-06-02 13:42:13.274101 | orchestrator | 2025-06-02 13:42:13.274954 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:42:13.275382 | orchestrator | Monday 02 June 2025 13:42:13 +0000 (0:00:03.525) 0:00:53.934 *********** 2025-06-02 13:42:13.275737 | orchestrator | =============================================================================== 2025-06-02 13:42:13.276241 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.11s 2025-06-02 13:42:13.277437 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.95s 2025-06-02 13:42:13.278533 | orchestrator | Create volume type local ------------------------------------------------ 6.91s 2025-06-02 13:42:13.279023 | orchestrator | Get volume type local --------------------------------------------------- 6.67s 2025-06-02 13:42:13.280216 | orchestrator | Set public network to default ------------------------------------------- 5.89s 2025-06-02 13:42:13.281601 | orchestrator | Create public network --------------------------------------------------- 4.96s 2025-06-02 13:42:13.282097 | orchestrator | Create public subnet ---------------------------------------------------- 4.28s 2025-06-02 13:42:13.282236 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.74s 2025-06-02 13:42:13.283348 | orchestrator | Create manager role ----------------------------------------------------- 3.53s 2025-06-02 13:42:13.283372 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s 2025-06-02 13:42:15.733431 | orchestrator | 2025-06-02 13:42:15 
| INFO  | It takes a moment until task b592e51b-b059-4e46-a084-b5e68ed4cdbb (image-manager) has been started and output is visible here. 2025-06-02 13:42:19.256909 | orchestrator | 2025-06-02 13:42:19 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-02 13:42:19.475751 | orchestrator | 2025-06-02 13:42:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-02 13:42:19.477422 | orchestrator | 2025-06-02 13:42:19 | INFO  | Importing image Cirros 0.6.2 2025-06-02 13:42:19.478645 | orchestrator | 2025-06-02 13:42:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 13:42:21.111521 | orchestrator | 2025-06-02 13:42:21 | INFO  | Waiting for image to leave queued state... 2025-06-02 13:42:23.161108 | orchestrator | 2025-06-02 13:42:23 | INFO  | Waiting for import to complete... 2025-06-02 13:42:33.259379 | orchestrator | 2025-06-02 13:42:33 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-02 13:42:33.460891 | orchestrator | 2025-06-02 13:42:33 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-02 13:42:33.461308 | orchestrator | 2025-06-02 13:42:33 | INFO  | Setting internal_version = 0.6.2 2025-06-02 13:42:33.462089 | orchestrator | 2025-06-02 13:42:33 | INFO  | Setting image_original_user = cirros 2025-06-02 13:42:33.463406 | orchestrator | 2025-06-02 13:42:33 | INFO  | Adding tag os:cirros 2025-06-02 13:42:33.705713 | orchestrator | 2025-06-02 13:42:33 | INFO  | Setting property architecture: x86_64 2025-06-02 13:42:33.977626 | orchestrator | 2025-06-02 13:42:33 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 13:42:34.190914 | orchestrator | 2025-06-02 13:42:34 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 13:42:34.403600 | orchestrator | 2025-06-02 13:42:34 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 13:42:34.590843 | orchestrator | 
2025-06-02 13:42:34 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 13:42:34.773627 | orchestrator | 2025-06-02 13:42:34 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 13:42:34.975839 | orchestrator | 2025-06-02 13:42:34 | INFO  | Setting property os_distro: cirros 2025-06-02 13:42:35.174109 | orchestrator | 2025-06-02 13:42:35 | INFO  | Setting property replace_frequency: never 2025-06-02 13:42:35.385325 | orchestrator | 2025-06-02 13:42:35 | INFO  | Setting property uuid_validity: none 2025-06-02 13:42:35.624278 | orchestrator | 2025-06-02 13:42:35 | INFO  | Setting property provided_until: none 2025-06-02 13:42:35.794541 | orchestrator | 2025-06-02 13:42:35 | INFO  | Setting property image_description: Cirros 2025-06-02 13:42:35.998742 | orchestrator | 2025-06-02 13:42:35 | INFO  | Setting property image_name: Cirros 2025-06-02 13:42:36.215299 | orchestrator | 2025-06-02 13:42:36 | INFO  | Setting property internal_version: 0.6.2 2025-06-02 13:42:36.412331 | orchestrator | 2025-06-02 13:42:36 | INFO  | Setting property image_original_user: cirros 2025-06-02 13:42:36.610638 | orchestrator | 2025-06-02 13:42:36 | INFO  | Setting property os_version: 0.6.2 2025-06-02 13:42:36.794080 | orchestrator | 2025-06-02 13:42:36 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-02 13:42:37.011615 | orchestrator | 2025-06-02 13:42:37 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-02 13:42:37.207842 | orchestrator | 2025-06-02 13:42:37 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-02 13:42:37.208573 | orchestrator | 2025-06-02 13:42:37 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-02 13:42:37.209990 | orchestrator | 2025-06-02 13:42:37 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-02 13:42:37.423611 | orchestrator | 2025-06-02 13:42:37 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-02 13:42:37.653296 | 
orchestrator | 2025-06-02 13:42:37 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-02 13:42:37.653394 | orchestrator | 2025-06-02 13:42:37 | INFO  | Importing image Cirros 0.6.3 2025-06-02 13:42:37.653409 | orchestrator | 2025-06-02 13:42:37 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 13:42:38.728550 | orchestrator | 2025-06-02 13:42:38 | INFO  | Waiting for image to leave queued state... 2025-06-02 13:42:40.804320 | orchestrator | 2025-06-02 13:42:40 | INFO  | Waiting for import to complete... 2025-06-02 13:42:51.103023 | orchestrator | 2025-06-02 13:42:51 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-02 13:42:51.670883 | orchestrator | 2025-06-02 13:42:51 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-02 13:42:51.672063 | orchestrator | 2025-06-02 13:42:51 | INFO  | Setting internal_version = 0.6.3 2025-06-02 13:42:51.672316 | orchestrator | 2025-06-02 13:42:51 | INFO  | Setting image_original_user = cirros 2025-06-02 13:42:51.673387 | orchestrator | 2025-06-02 13:42:51 | INFO  | Adding tag os:cirros 2025-06-02 13:42:51.955318 | orchestrator | 2025-06-02 13:42:51 | INFO  | Setting property architecture: x86_64 2025-06-02 13:42:52.231871 | orchestrator | 2025-06-02 13:42:52 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 13:42:52.507479 | orchestrator | 2025-06-02 13:42:52 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 13:42:52.689852 | orchestrator | 2025-06-02 13:42:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 13:42:52.889107 | orchestrator | 2025-06-02 13:42:52 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 13:42:53.090252 | orchestrator | 2025-06-02 13:42:53 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 13:42:53.257790 | orchestrator | 2025-06-02 13:42:53 | INFO  | Setting property 
os_distro: cirros 2025-06-02 13:42:53.430187 | orchestrator | 2025-06-02 13:42:53 | INFO  | Setting property replace_frequency: never 2025-06-02 13:42:53.652525 | orchestrator | 2025-06-02 13:42:53 | INFO  | Setting property uuid_validity: none 2025-06-02 13:42:53.887629 | orchestrator | 2025-06-02 13:42:53 | INFO  | Setting property provided_until: none 2025-06-02 13:42:54.091573 | orchestrator | 2025-06-02 13:42:54 | INFO  | Setting property image_description: Cirros 2025-06-02 13:42:54.288382 | orchestrator | 2025-06-02 13:42:54 | INFO  | Setting property image_name: Cirros 2025-06-02 13:42:54.492082 | orchestrator | 2025-06-02 13:42:54 | INFO  | Setting property internal_version: 0.6.3 2025-06-02 13:42:54.697936 | orchestrator | 2025-06-02 13:42:54 | INFO  | Setting property image_original_user: cirros 2025-06-02 13:42:54.889694 | orchestrator | 2025-06-02 13:42:54 | INFO  | Setting property os_version: 0.6.3 2025-06-02 13:42:55.095585 | orchestrator | 2025-06-02 13:42:55 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-02 13:42:55.290209 | orchestrator | 2025-06-02 13:42:55 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-02 13:42:55.515641 | orchestrator | 2025-06-02 13:42:55 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-02 13:42:55.515947 | orchestrator | 2025-06-02 13:42:55 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-02 13:42:55.518214 | orchestrator | 2025-06-02 13:42:55 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-02 13:42:56.497120 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-02 13:42:58.479068 | orchestrator | 2025-06-02 13:42:58 | INFO  | date: 2025-06-02 2025-06-02 13:42:58.479156 | orchestrator | 2025-06-02 13:42:58 | INFO  | image: octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 13:42:58.479304 | orchestrator | 2025-06-02 13:42:58 | 
INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 13:42:58.479333 | orchestrator | 2025-06-02 13:42:58 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2.CHECKSUM 2025-06-02 13:42:58.526625 | orchestrator | 2025-06-02 13:42:58 | INFO  | checksum: 4244ae669e0302e4de8dd880cdee4c27c232e9d393dd18f3521b5d0e7c284b7c 2025-06-02 13:42:58.602275 | orchestrator | 2025-06-02 13:42:58 | INFO  | It takes a moment until task 4af28ed9-735c-446b-bc09-2d7d0a546dcf (image-manager) has been started and output is visible here. 2025-06-02 13:42:58.857715 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-06-02 13:42:58.857960 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-02 13:43:01.035559 | orchestrator | 2025-06-02 13:43:01 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 13:43:01.054010 | orchestrator | 2025-06-02 13:43:01 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2: 200 2025-06-02 13:43:01.054256 | orchestrator | 2025-06-02 13:43:01 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-02 2025-06-02 13:43:01.055024 | orchestrator | 2025-06-02 13:43:01 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 13:43:02.122370 | orchestrator | 2025-06-02 13:43:02 | INFO  | Waiting for image to leave queued state... 2025-06-02 13:43:04.168967 | orchestrator | 2025-06-02 13:43:04 | INFO  | Waiting for import to complete... 2025-06-02 13:43:14.259744 | orchestrator | 2025-06-02 13:43:14 | INFO  | Waiting for import to complete... 2025-06-02 13:43:24.348817 | orchestrator | 2025-06-02 13:43:24 | INFO  | Waiting for import to complete... 2025-06-02 13:43:34.447096 | orchestrator | 2025-06-02 13:43:34 | INFO  | Waiting for import to complete... 2025-06-02 13:43:44.527107 | orchestrator | 2025-06-02 13:43:44 | INFO  | Waiting for import to complete... 
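The repeated "Waiting for import to complete..." messages come from a poll loop in the image manager: query the image status at a fixed interval until it becomes active, bounded by an overall timeout. The general pattern can be sketched as follows — a hedged illustration, where `get_status` stands in for whatever call the tool actually makes against the Glance API:

```python
import time

def wait_for(get_status, *, done="active", interval=10.0, timeout=600.0,
             sleep=time.sleep, clock=time.monotonic):
    """Poll get_status() every `interval` seconds until it returns `done`.

    Raises TimeoutError if `timeout` seconds elapse first. `sleep` and
    `clock` are injectable so the loop can be exercised without real waiting.
    """
    deadline = clock() + timeout
    while True:
        status = get_status()
        if status == done:
            return status
        if clock() >= deadline:
            raise TimeoutError(f"import still {status!r} after {timeout}s")
        print("Waiting for import to complete...")
        sleep(interval)

# Example with a fake status source: 'importing' twice, then 'active'.
states = iter(["importing", "importing", "active"])
assert wait_for(lambda: next(states), interval=0, sleep=lambda s: None) == "active"
```

Injecting `sleep` and `clock` keeps the loop deterministic under test; in production the defaults reproduce the ~10-second cadence visible in the timestamps above.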
2025-06-02 13:43:54.648984 | orchestrator | 2025-06-02 13:43:54 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-02' successfully completed, reloading images 2025-06-02 13:43:54.990129 | orchestrator | 2025-06-02 13:43:54 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 13:43:54.990518 | orchestrator | 2025-06-02 13:43:54 | INFO  | Setting internal_version = 2025-06-02 2025-06-02 13:43:54.991428 | orchestrator | 2025-06-02 13:43:54 | INFO  | Setting image_original_user = ubuntu 2025-06-02 13:43:54.992572 | orchestrator | 2025-06-02 13:43:54 | INFO  | Adding tag amphora 2025-06-02 13:43:55.178985 | orchestrator | 2025-06-02 13:43:55 | INFO  | Adding tag os:ubuntu 2025-06-02 13:43:55.364640 | orchestrator | 2025-06-02 13:43:55 | INFO  | Setting property architecture: x86_64 2025-06-02 13:43:55.569562 | orchestrator | 2025-06-02 13:43:55 | INFO  | Setting property hw_disk_bus: scsi 2025-06-02 13:43:55.729435 | orchestrator | 2025-06-02 13:43:55 | INFO  | Setting property hw_rng_model: virtio 2025-06-02 13:43:55.912783 | orchestrator | 2025-06-02 13:43:55 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-02 13:43:56.101840 | orchestrator | 2025-06-02 13:43:56 | INFO  | Setting property hw_watchdog_action: reset 2025-06-02 13:43:56.288094 | orchestrator | 2025-06-02 13:43:56 | INFO  | Setting property hypervisor_type: qemu 2025-06-02 13:43:56.670141 | orchestrator | 2025-06-02 13:43:56 | INFO  | Setting property os_distro: ubuntu 2025-06-02 13:43:56.858098 | orchestrator | 2025-06-02 13:43:56 | INFO  | Setting property replace_frequency: quarterly 2025-06-02 13:43:57.081670 | orchestrator | 2025-06-02 13:43:57 | INFO  | Setting property uuid_validity: last-1 2025-06-02 13:43:57.273406 | orchestrator | 2025-06-02 13:43:57 | INFO  | Setting property provided_until: none 2025-06-02 13:43:57.438573 | orchestrator | 2025-06-02 13:43:57 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-02 
13:43:57.665080 | orchestrator | 2025-06-02 13:43:57 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-02 13:43:57.893357 | orchestrator | 2025-06-02 13:43:57 | INFO  | Setting property internal_version: 2025-06-02 2025-06-02 13:43:58.098875 | orchestrator | 2025-06-02 13:43:58 | INFO  | Setting property image_original_user: ubuntu 2025-06-02 13:43:58.278145 | orchestrator | 2025-06-02 13:43:58 | INFO  | Setting property os_version: 2025-06-02 2025-06-02 13:43:58.478153 | orchestrator | 2025-06-02 13:43:58 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2 2025-06-02 13:43:58.671457 | orchestrator | 2025-06-02 13:43:58 | INFO  | Setting property image_build_date: 2025-06-02 2025-06-02 13:43:58.884539 | orchestrator | 2025-06-02 13:43:58 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 13:43:58.885334 | orchestrator | 2025-06-02 13:43:58 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-02' 2025-06-02 13:43:59.076908 | orchestrator | 2025-06-02 13:43:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-02 13:43:59.077092 | orchestrator | 2025-06-02 13:43:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-02 13:43:59.077973 | orchestrator | 2025-06-02 13:43:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-02 13:43:59.078635 | orchestrator | 2025-06-02 13:43:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-02 13:43:59.919623 | orchestrator | ok: Runtime: 0:02:56.855822 2025-06-02 13:43:59.986767 | 2025-06-02 13:43:59.986970 | TASK [Run checks] 2025-06-02 13:44:00.719632 | orchestrator | + set -e 2025-06-02 13:44:00.719825 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 13:44:00.719852 | 
orchestrator | ++ export INTERACTIVE=false 2025-06-02 13:44:00.719873 | orchestrator | ++ INTERACTIVE=false 2025-06-02 13:44:00.719887 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 13:44:00.719900 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 13:44:00.719927 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 13:44:00.721108 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 13:44:00.727380 | orchestrator | 2025-06-02 13:44:00.727429 | orchestrator | # CHECK 2025-06-02 13:44:00.727442 | orchestrator | 2025-06-02 13:44:00.727453 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 13:44:00.727474 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 13:44:00.727495 | orchestrator | + echo 2025-06-02 13:44:00.727515 | orchestrator | + echo '# CHECK' 2025-06-02 13:44:00.727535 | orchestrator | + echo 2025-06-02 13:44:00.727558 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 13:44:00.728385 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 13:44:00.792269 | orchestrator | 2025-06-02 13:44:00.792360 | orchestrator | ## Containers @ testbed-manager 2025-06-02 13:44:00.792373 | orchestrator | 2025-06-02 13:44:00.792387 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 13:44:00.792398 | orchestrator | + echo 2025-06-02 13:44:00.792409 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-02 13:44:00.792421 | orchestrator | + echo 2025-06-02 13:44:00.792433 | orchestrator | + osism container testbed-manager ps 2025-06-02 13:44:02.911909 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 13:44:02.912041 | orchestrator | ee0feab2c6a4 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-06-02 13:44:02.912066 | orchestrator | f3df56aad7f8 
registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-06-02 13:44:02.912086 | orchestrator | 10ec772e2dff registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-02 13:44:02.912098 | orchestrator | ff7c31a33c9f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-02 13:44:02.912109 | orchestrator | b26fdcb968ed registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-06-02 13:44:02.912121 | orchestrator | 28776f24d9fc registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2025-06-02 13:44:02.912137 | orchestrator | 4e419bbf9ba3 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-02 13:44:02.912149 | orchestrator | 04051aed9bc6 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-02 13:44:02.912160 | orchestrator | 93293d091431 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2025-06-02 13:44:02.912224 | orchestrator | 78c837fbcabe phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 28 minutes (healthy) 80/tcp phpmyadmin 2025-06-02 13:44:02.912239 | orchestrator | 344bfa152bd7 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 29 minutes ago Up 29 minutes openstackclient 2025-06-02 13:44:02.912250 | orchestrator | 8fb550264c23 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 29 minutes ago Up 29 minutes (healthy) 8080/tcp homer 2025-06-02 13:44:02.912262 | orchestrator 
| 408f509a2094 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 48 minutes ago Up 48 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-02 13:44:02.912278 | orchestrator | 9d23f51d44f5 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 52 minutes ago Up 50 minutes (healthy) manager-inventory_reconciler-1 2025-06-02 13:44:02.912310 | orchestrator | 48e0c0ac6355 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 52 minutes ago Up 51 minutes (healthy) ceph-ansible 2025-06-02 13:44:02.912322 | orchestrator | 3fbd36025b3f registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 52 minutes ago Up 51 minutes (healthy) osism-kubernetes 2025-06-02 13:44:02.912333 | orchestrator | 0afb992d3908 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 52 minutes ago Up 51 minutes (healthy) osism-ansible 2025-06-02 13:44:02.912344 | orchestrator | df7b9b254471 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 52 minutes ago Up 51 minutes (healthy) kolla-ansible 2025-06-02 13:44:02.912355 | orchestrator | 601d97d770d6 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 52 minutes ago Up 51 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-02 13:44:02.912366 | orchestrator | 2420e3e681b7 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-openstack-1 2025-06-02 13:44:02.912378 | orchestrator | 133f5a4aa6d6 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-02 13:44:02.912389 | orchestrator | 64455b220913 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 52 minutes ago Up 52 minutes (healthy) 6379/tcp manager-redis-1 2025-06-02 13:44:02.912400 | orchestrator | 9e864efacc7e 
registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 52 minutes ago Up 52 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-02 13:44:02.912419 | orchestrator | 28fd86ebc870 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-watchdog-1 2025-06-02 13:44:02.912430 | orchestrator | 059844ab450a registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-listener-1 2025-06-02 13:44:02.912442 | orchestrator | 393029bce413 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 52 minutes ago Up 52 minutes (healthy) osismclient 2025-06-02 13:44:02.912453 | orchestrator | 7ee9cc62d467 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-beat-1 2025-06-02 13:44:02.912464 | orchestrator | b30bd3cc9944 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 52 minutes ago Up 52 minutes (healthy) manager-flower-1 2025-06-02 13:44:02.912475 | orchestrator | fdd8c6ce5d69 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-02 13:44:03.165709 | orchestrator | 2025-06-02 13:44:03.165859 | orchestrator | ## Images @ testbed-manager 2025-06-02 13:44:03.165909 | orchestrator | 2025-06-02 13:44:03.165923 | orchestrator | + echo 2025-06-02 13:44:03.165962 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-02 13:44:03.165976 | orchestrator | + echo 2025-06-02 13:44:03.165987 | orchestrator | + osism container testbed-manager images 2025-06-02 13:44:05.285980 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 13:44:05.286125 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e73e0506845d 10 hours ago 11.5MB 2025-06-02 13:44:05.286143 | orchestrator | 
registry.osism.tech/osism/openstackclient 2024.2 86ee4afc8387 10 hours ago 225MB 2025-06-02 13:44:05.286176 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 73cd5a0acb2a 41 hours ago 574MB 2025-06-02 13:44:05.286215 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 41 hours ago 578MB 2025-06-02 13:44:05.286226 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 13:44:05.286237 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 13:44:05.286248 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 13:44:05.286258 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 2 days ago 892MB 2025-06-02 13:44:05.286269 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 2 days ago 361MB 2025-06-02 13:44:05.286280 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 13:44:05.286291 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 13:44:05.286326 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 2 days ago 457MB 2025-06-02 13:44:05.286338 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 2 days ago 538MB 2025-06-02 13:44:05.286349 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 2 days ago 1.21GB 2025-06-02 13:44:05.286360 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 2 days ago 308MB 2025-06-02 13:44:05.286370 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 days ago 297MB 2025-06-02 
13:44:05.286381 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 days ago 41.4MB 2025-06-02 13:44:05.286392 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 6 days ago 224MB 2025-06-02 13:44:05.286403 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB 2025-06-02 13:44:05.286413 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB 2025-06-02 13:44:05.286424 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-02 13:44:05.286434 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-02 13:44:05.286445 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB 2025-06-02 13:44:05.555975 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 13:44:05.556140 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 13:44:05.610922 | orchestrator | 2025-06-02 13:44:05.610979 | orchestrator | ## Containers @ testbed-node-0 2025-06-02 13:44:05.610993 | orchestrator | 2025-06-02 13:44:05.611006 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 13:44:05.611017 | orchestrator | + echo 2025-06-02 13:44:05.611029 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-02 13:44:05.611041 | orchestrator | + echo 2025-06-02 13:44:05.611052 | orchestrator | + osism container testbed-node-0 ps 2025-06-02 13:44:07.826143 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 13:44:07.826276 | orchestrator | a911a9b79892 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 13:44:07.826294 | orchestrator | 3b99ae096bb1 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes 
(healthy) octavia_housekeeping 2025-06-02 13:44:07.826306 | orchestrator | f16c66cab737 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 13:44:07.826317 | orchestrator | 396aefb15781 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 13:44:07.826328 | orchestrator | c849a5bb9827 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-02 13:44:07.826357 | orchestrator | 717382cde3f9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 13:44:07.826368 | orchestrator | 312277607ffd registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 13:44:07.826399 | orchestrator | f8e86bc51795 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 13:44:07.826410 | orchestrator | 141328f136ec registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 13:44:07.826421 | orchestrator | 3100dfd30f8e registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 13:44:07.826431 | orchestrator | 5287ffd86ff6 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 13:44:07.826442 | orchestrator | 1135e4cd629b registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 
13:44:07.826453 | orchestrator | a46dac60123f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 13:44:07.826464 | orchestrator | e245c12bcae0 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-02 13:44:07.826475 | orchestrator | 91470995c447 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2025-06-02 13:44:07.826486 | orchestrator | 14d2e263863a registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-02 13:44:07.826496 | orchestrator | c8a991f89200 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api 2025-06-02 13:44:07.826507 | orchestrator | c2f4ef46059f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 13:44:07.826518 | orchestrator | a7855fe258d9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 13:44:07.826548 | orchestrator | 05a554d189a5 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-02 13:44:07.826559 | orchestrator | 68efb091f19b registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-02 13:44:07.826570 | orchestrator | 402f51b81171 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) 
nova_api 2025-06-02 13:44:07.826581 | orchestrator | f24e6c92668c registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-02 13:44:07.826592 | orchestrator | 50588b8cdea7 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-06-02 13:44:07.826610 | orchestrator | 9ca1cec539dd registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-02 13:44:07.826629 | orchestrator | 57818ea75499 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-02 13:44:07.826640 | orchestrator | ba9da5c1e341 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2025-06-02 13:44:07.826651 | orchestrator | b3a29676b9af registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-02 13:44:07.826669 | orchestrator | f28a7a972566 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 13:44:07.826685 | orchestrator | adaa2af487e3 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 13:44:07.826697 | orchestrator | 0ad28e03aca0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-02 13:44:07.826708 | orchestrator | 3935fdcf9ec5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 
15 minutes ceph-mgr-testbed-node-0 2025-06-02 13:44:07.826719 | orchestrator | 0884e4b9631e registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-06-02 13:44:07.826730 | orchestrator | dc1a6238d298 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-06-02 13:44:07.826741 | orchestrator | 5a6b7fd0abad registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-06-02 13:44:07.826756 | orchestrator | 40912a28a129 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon 2025-06-02 13:44:07.826768 | orchestrator | ebb10eca1371 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-02 13:44:07.826779 | orchestrator | 78d912b2ce7c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-02 13:44:07.826790 | orchestrator | 3d160e4309f0 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-02 13:44:07.826800 | orchestrator | 292167a1b9eb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2025-06-02 13:44:07.826818 | orchestrator | 59075ee816ef registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-02 13:44:07.826829 | orchestrator | 2ce36621bb12 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-02 13:44:07.826840 | orchestrator | 
bc3c091759f9 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-02 13:44:07.826858 | orchestrator | db5a0de25d7d registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-06-02 13:44:07.826869 | orchestrator | a9cf76a65308 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-02 13:44:07.826880 | orchestrator | cad9857f88f8 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-02 13:44:07.826891 | orchestrator | 3f508632e81c registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2025-06-02 13:44:07.826903 | orchestrator | 09553a5d9c69 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-06-02 13:44:07.826914 | orchestrator | 2ef80e0ff4a6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2025-06-02 13:44:07.826925 | orchestrator | 026cfc27f540 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-06-02 13:44:07.826936 | orchestrator | 7739e6e6f876 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-06-02 13:44:07.826947 | orchestrator | 6de13bba08d2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-06-02 13:44:07.826957 | orchestrator | e42d9c08b897 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 
27 minutes ago Up 27 minutes (healthy) redis 2025-06-02 13:44:07.826968 | orchestrator | 4330b5101034 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached 2025-06-02 13:44:07.826979 | orchestrator | 50c406af8b2a registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-02 13:44:07.826990 | orchestrator | 70ca14205bc3 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-02 13:44:07.827001 | orchestrator | a8b9cb3c2e96 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-02 13:44:08.091958 | orchestrator | 2025-06-02 13:44:08.092089 | orchestrator | ## Images @ testbed-node-0 2025-06-02 13:44:08.092113 | orchestrator | 2025-06-02 13:44:08.092133 | orchestrator | + echo 2025-06-02 13:44:08.092152 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-02 13:44:08.092172 | orchestrator | + echo 2025-06-02 13:44:08.092219 | orchestrator | + osism container testbed-node-0 images 2025-06-02 13:44:10.187042 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 13:44:10.187153 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB 2025-06-02 13:44:10.187168 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB 2025-06-02 13:44:10.187228 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB 2025-06-02 13:44:10.187269 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB 2025-06-02 13:44:10.187281 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB 2025-06-02 13:44:10.187292 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB 2025-06-02 13:44:10.187302 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB 2025-06-02 13:44:10.187331 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB 2025-06-02 13:44:10.187342 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB 2025-06-02 13:44:10.187352 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB 2025-06-02 13:44:10.187363 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB 2025-06-02 13:44:10.187374 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB 2025-06-02 13:44:10.187385 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB 2025-06-02 13:44:10.187396 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB 2025-06-02 13:44:10.187407 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB 2025-06-02 13:44:10.187417 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB 2025-06-02 13:44:10.187428 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB 2025-06-02 13:44:10.187438 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB 2025-06-02 13:44:10.187448 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB 2025-06-02 13:44:10.187459 | orchestrator | 
registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB 2025-06-02 13:44:10.187470 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB 2025-06-02 13:44:10.187480 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB 2025-06-02 13:44:10.187491 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB 2025-06-02 13:44:10.187501 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB 2025-06-02 13:44:10.187512 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB 2025-06-02 13:44:10.187522 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 2 days ago 1.04GB 2025-06-02 13:44:10.187533 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 2 days ago 1.04GB 2025-06-02 13:44:10.187543 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 2 days ago 1.04GB 2025-06-02 13:44:10.187554 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 2 days ago 1.04GB 2025-06-02 13:44:10.187564 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB 2025-06-02 13:44:10.187582 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB 2025-06-02 13:44:10.187679 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB 2025-06-02 13:44:10.187700 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB 2025-06-02 13:44:10.187712 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 
15.0.1.20250530 81c4f823534a 2 days ago 1.1GB 2025-06-02 13:44:10.187722 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB 2025-06-02 13:44:10.187733 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB 2025-06-02 13:44:10.187743 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB 2025-06-02 13:44:10.187754 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB 2025-06-02 13:44:10.187765 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB 2025-06-02 13:44:10.187775 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB 2025-06-02 13:44:10.187786 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 13:44:10.187797 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 13:44:10.187807 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 13:44:10.187818 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 13:44:10.187829 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 2 days ago 1.04GB 2025-06-02 13:44:10.187840 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 2 days ago 1.04GB 2025-06-02 13:44:10.187859 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 13:44:10.187878 | orchestrator | registry.osism.tech/kolla/release/nova-api 
30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 13:44:10.187898 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 13:44:10.187916 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 13:44:10.187932 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 13:44:10.187943 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 13:44:10.187954 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 13:44:10.187964 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 13:44:10.187975 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 13:44:10.187985 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 13:44:10.188006 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 2 days ago 1.11GB
2025-06-02 13:44:10.188017 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 2 days ago 1.12GB
2025-06-02 13:44:10.188027 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 13:44:10.188038 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 13:44:10.188054 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 13:44:10.188064 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 13:44:10.188075 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-02 13:44:10.440800 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 13:44:10.442086 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 13:44:10.485916 | orchestrator |
2025-06-02 13:44:10.486064 | orchestrator | ## Containers @ testbed-node-1
2025-06-02 13:44:10.486083 | orchestrator |
2025-06-02 13:44:10.486095 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 13:44:10.486106 | orchestrator | + echo
2025-06-02 13:44:10.486119 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-02 13:44:10.486130 | orchestrator | + echo
2025-06-02 13:44:10.486141 | orchestrator | + osism container testbed-node-1 ps
2025-06-02 13:44:12.629290 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 13:44:12.629415 | orchestrator | d96b81aa5359 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 13:44:12.629433 | orchestrator | be9a8886b5e3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 13:44:12.629445 | orchestrator | d4231d2966ce registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 13:44:12.629456 | orchestrator | 3afdb157b4f1 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 13:44:12.629467 | orchestrator | 86ac6bfd03da registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-02 13:44:12.629478 | orchestrator | 0d09afffd070 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2025-06-02 13:44:12.629489 | orchestrator | 478ffbf40931 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 13:44:12.629500 | orchestrator | b9ab8b149e9f registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-02 13:44:12.629511 | orchestrator | 0a062d4f524a registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-02 13:44:12.629521 | orchestrator | d40d098115cb registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 13:44:12.629553 | orchestrator | 9a26deead259 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-02 13:44:12.629564 | orchestrator | a046b5d39f76 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 13:44:12.629575 | orchestrator | e7af63545bd1 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-02 13:44:12.629586 | orchestrator | a6f34c20fb74 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-02 13:44:12.629597 | orchestrator | f5237f7971ef registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-02 13:44:12.629608 | orchestrator | 7a1fae0ecada registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-02 13:44:12.629636 | orchestrator | 35bb7bd46e38 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-02 13:44:12.629653 | orchestrator | 46ad949493f5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-02 13:44:12.629665 | orchestrator | 17dbbd42da93 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-02 13:44:12.629695 | orchestrator | 65e248834160 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-02 13:44:12.629706 | orchestrator | a1e3a3268272 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-02 13:44:12.629717 | orchestrator | 379e6ada03ec registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-02 13:44:12.629729 | orchestrator | e47682165cbe registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-02 13:44:12.629740 | orchestrator | 1ead0cfcb575 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-02 13:44:12.629751 | orchestrator | eb8f00311bdd registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-02 13:44:12.629764 | orchestrator | ae8c9b3823d3 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-02 13:44:12.629775 | orchestrator | b6c7e3565015 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 13:44:12.629787 | orchestrator | f231390c4bc2 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-06-02 13:44:12.629808 | orchestrator | 47c860267067 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-02 13:44:12.629822 | orchestrator | ebd08e701824 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-02 13:44:12.629834 | orchestrator | 25b53720ca9f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-02 13:44:12.629847 | orchestrator | ec0e12f95e3d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-06-02 13:44:12.629859 | orchestrator | 9fa04e9900d9 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-02 13:44:12.629871 | orchestrator | 5d5f8bef86a0 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-06-02 13:44:12.629884 | orchestrator | becaf4ab1542 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon
2025-06-02 13:44:12.629897 | orchestrator | 517cdc7a9e29 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-02 13:44:12.629910 | orchestrator | a228ba3129dd registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-06-02 13:44:12.629922 | orchestrator | 0d45dfb9acbc registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-06-02 13:44:12.629934 | orchestrator | 8862814bf9ec registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-06-02 13:44:12.629952 | orchestrator | 4286ab8f0d80 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1
2025-06-02 13:44:12.629973 | orchestrator | ee1c7bb08220 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-06-02 13:44:12.629986 | orchestrator | e55d60264644 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-02 13:44:12.629998 | orchestrator | fba3cdaf02b8 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-02 13:44:12.630010 | orchestrator | f2a3e0bed043 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2025-06-02 13:44:12.630085 | orchestrator | 66399efc840e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db
2025-06-02 13:44:12.630097 | orchestrator | e703241c3c4a registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db
2025-06-02 13:44:12.630110 | orchestrator | bf2eb6c24158 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2025-06-02 13:44:12.630138 | orchestrator | 49bf7b5b8bf2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-06-02 13:44:12.630151 | orchestrator | 662c67a726da registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2025-06-02 13:44:12.630163 | orchestrator | 74d65a0f45fb registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-02 13:44:12.630174 | orchestrator | a32133080cb1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-02 13:44:12.630211 | orchestrator | f34386dec024 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-02 13:44:12.630223 | orchestrator | 14e2a3e81287 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-02 13:44:12.630233 | orchestrator | 1eba6c23efb7 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2025-06-02 13:44:12.630244 | orchestrator | a8ed981ba3eb registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-02 13:44:12.630255 | orchestrator | 92c855997856 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-02 13:44:12.630265 | orchestrator | c36e7d288f7c registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-02 13:44:12.917003 | orchestrator |
2025-06-02 13:44:12.917087 | orchestrator | ## Images @ testbed-node-1
2025-06-02 13:44:12.917098 | orchestrator |
2025-06-02 13:44:12.917107 | orchestrator | + echo
2025-06-02 13:44:12.917116 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-02 13:44:12.917124 | orchestrator | + echo
2025-06-02 13:44:12.917133 | orchestrator | + osism container testbed-node-1 images
2025-06-02 13:44:15.034518 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 13:44:15.034621 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 13:44:15.034635 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 13:44:15.034648 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 13:44:15.034658 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 13:44:15.034669 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 13:44:15.034680 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 13:44:15.034691 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 13:44:15.034701 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 13:44:15.034737 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 13:44:15.034749 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 13:44:15.034759 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 13:44:15.034787 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 13:44:15.034798 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 13:44:15.034809 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 13:44:15.034820 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 13:44:15.034830 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 13:44:15.034841 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 13:44:15.034852 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 13:44:15.035087 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 13:44:15.035295 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 13:44:15.035349 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 13:44:15.035364 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 13:44:15.035395 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 13:44:15.035406 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 13:44:15.035417 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 13:44:15.035576 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 13:44:15.035631 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 13:44:15.035644 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB
2025-06-02 13:44:15.035655 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB
2025-06-02 13:44:15.035666 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB
2025-06-02 13:44:15.035677 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB
2025-06-02 13:44:15.035687 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB
2025-06-02 13:44:15.035698 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB
2025-06-02 13:44:15.035744 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB
2025-06-02 13:44:15.035758 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB
2025-06-02 13:44:15.035822 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB
2025-06-02 13:44:15.035835 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB
2025-06-02 13:44:15.035846 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB
2025-06-02 13:44:15.035857 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB
2025-06-02 13:44:15.035892 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB
2025-06-02 13:44:15.035935 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB
2025-06-02 13:44:15.035950 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB
2025-06-02 13:44:15.035961 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB
2025-06-02 13:44:15.036004 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB
2025-06-02 13:44:15.036015 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB
2025-06-02 13:44:15.036026 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB
2025-06-02 13:44:15.036037 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB
2025-06-02 13:44:15.036121 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB
2025-06-02 13:44:15.036135 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB
2025-06-02 13:44:15.036146 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB
2025-06-02 13:44:15.036156 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB
2025-06-02 13:44:15.036167 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB
2025-06-02 13:44:15.036178 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB
2025-06-02 13:44:15.036234 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB
2025-06-02 13:44:15.036246 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-02 13:44:15.296261 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 13:44:15.296514 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-02 13:44:15.363889 | orchestrator |
2025-06-02 13:44:15.363945 | orchestrator | ## Containers @ testbed-node-2
2025-06-02 13:44:15.363957 | orchestrator |
2025-06-02 13:44:15.363968 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-02 13:44:15.363979 | orchestrator | + echo
2025-06-02 13:44:15.363991 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-06-02 13:44:15.364002 | orchestrator | + echo
2025-06-02 13:44:15.364035 | orchestrator | + osism container testbed-node-2 ps
2025-06-02 13:44:17.519711 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 13:44:17.519816 | orchestrator | fb808abae047 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 13:44:17.519856 | orchestrator | 4724202f6558 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 13:44:17.519869 | orchestrator | 69f971bfb051 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 13:44:17.519880 | orchestrator | 26a968163e2d registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 13:44:17.519891 | orchestrator | 4de76e2f1df5 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-02 13:44:17.519902 | orchestrator | 30df9878c5ad registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2025-06-02 13:44:17.519913 | orchestrator | dea5ce417d5a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 13:44:17.519924 | orchestrator | 5d5dd03c583f registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-02 13:44:17.519934 | orchestrator | 7d388e1306db registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-02 13:44:17.519945 | orchestrator | d85aa8606919 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 13:44:17.519956 | orchestrator | a3897c6738af registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2025-06-02 13:44:17.519967 | orchestrator | c988505956c8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 13:44:17.519978 | orchestrator | 5a9e7eeac9d7 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-02 13:44:17.519989 | orchestrator | c1a28968cbb5 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-02 13:44:17.520000 | orchestrator | bac4b5cd7e46 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-02 13:44:17.520010 | orchestrator | 4214c24bd61a registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-02 13:44:17.520021 | orchestrator | 4e01f1802946 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_api
2025-06-02 13:44:17.520032 | orchestrator | a82e555e0786 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-02 13:44:17.520043 | orchestrator | a1736bfa501c registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-02 13:44:17.520077 | orchestrator | 811b0be1bca1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-02 13:44:17.520089 | orchestrator | 5a3993a45328 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-02 13:44:17.520100 | orchestrator | 01ba0c21ca1a registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-02 13:44:17.520111 | orchestrator | b2cfb8023545 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-02 13:44:17.520122 | orchestrator | 729d23ab0d8d registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-02 13:44:17.520132 | orchestrator | 3800af9169d6 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-06-02 13:44:17.520145 | orchestrator | 496db4ad4cc6 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-02 13:44:17.520156 | orchestrator | f1e0616a2960 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2025-06-02 13:44:17.520168 | orchestrator | 1585ecc0b222 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-06-02 13:44:17.520179 | orchestrator | c7c9feb977ef registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-02 13:44:17.520219 | orchestrator | 62972aed1df9 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-02 13:44:17.520249 | orchestrator | faac7afe0cfe registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-02 13:44:17.520263 | orchestrator | f03b31135e55 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-06-02 13:44:17.520276 | orchestrator | b9b6256eb53f registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-06-02 13:44:17.520290 | orchestrator | f4051c277714 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-06-02 13:44:17.520302 | orchestrator | 5ccb1e83ee81 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (unhealthy) horizon
2025-06-02 13:44:17.520315 | orchestrator | cae8f6d21ac9 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-02 13:44:17.520327 | orchestrator | b34a79cbdabc registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-06-02 13:44:17.520348 | orchestrator | 1b5a54a3ff70 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-06-02 13:44:17.520361 | orchestrator | c65e96c0d5a6 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-06-02 13:44:17.520378 | orchestrator | 707ea5db28b1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2
2025-06-02 13:44:17.520398 | orchestrator | 57f717814412 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-06-02 13:44:17.520411 | orchestrator | 19f734af835a registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-02 13:44:17.520424 | orchestrator | 1f3d5c1244d5 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-02 13:44:17.520436 | orchestrator | d6c495a1a796 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2025-06-02 13:44:17.520449 | orchestrator | fddee766cacb registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db
2025-06-02 13:44:17.520462 | orchestrator | 669fd029a0e1 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_nb_db
2025-06-02 13:44:17.520474 | orchestrator | fe41e945a8e8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq
2025-06-02 13:44:17.520486 | orchestrator | a2c3e0c22d4c registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_controller
2025-06-02 13:44:17.520499 | orchestrator | ff5a2b847696 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2
2025-06-02 13:44:17.520511 | orchestrator | fa20a55fdee7 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-02 13:44:17.520523 | orchestrator | fcb7210cd82c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-02 13:44:17.520536 | orchestrator | f35ab9543ea8 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-02 13:44:17.520548 | orchestrator | fe25b83ba5ed registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-02 13:44:17.520562 | orchestrator | 2c53ae8fb123 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2025-06-02 13:44:17.520575 | orchestrator | 36bdccc5b25c registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-02 13:44:17.520592 | orchestrator | 2f28c571ebec registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-02 13:44:17.520603 | orchestrator | f3a21194bd53 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-02 13:44:17.802101 | orchestrator |
2025-06-02 13:44:17.802273 | orchestrator | ## Images @ testbed-node-2
2025-06-02 13:44:17.802300 | orchestrator |
2025-06-02 13:44:17.802318 | orchestrator | + echo
2025-06-02 13:44:17.802333 | orchestrator | + echo '## Images @ testbed-node-2'
2025-06-02 13:44:17.802349 | orchestrator | + echo
2025-06-02 13:44:17.802359 | orchestrator | + osism container testbed-node-2 images
2025-06-02 13:44:19.954802 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 13:44:19.954905 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 2 days ago 319MB
2025-06-02 13:44:19.954920 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 2 days ago 319MB
2025-06-02 13:44:19.954932 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 2 days ago 330MB
2025-06-02 13:44:19.954943 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 2 days ago 1.59GB
2025-06-02 13:44:19.954954 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 2 days ago 1.55GB
2025-06-02 13:44:19.954965 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 2 days ago 419MB
2025-06-02 13:44:19.954975 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 2 days ago 747MB
2025-06-02 13:44:19.954986 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 2 days ago 327MB
2025-06-02 13:44:19.954997 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 2 days ago 376MB
2025-06-02 13:44:19.955008 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 2 days ago 629MB
2025-06-02 13:44:19.955019 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 2 days ago 1.01GB
2025-06-02 13:44:19.955029 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 2 days ago 591MB
2025-06-02 13:44:19.955040 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 2 days ago 354MB
2025-06-02 13:44:19.955051 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 2 days ago 352MB
2025-06-02 13:44:19.955083 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 2 days ago 411MB
2025-06-02 13:44:19.955095 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 2 days ago 345MB
2025-06-02 13:44:19.955105 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 2 days ago 359MB
2025-06-02 13:44:19.955116 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 2 days ago 325MB
2025-06-02 13:44:19.955127 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 2 days ago 326MB
2025-06-02 13:44:19.955138 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 2 days ago 1.21GB
2025-06-02 13:44:19.955148 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 2 days ago 362MB
2025-06-02 13:44:19.955180 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 2 days ago 362MB
2025-06-02 13:44:19.955236 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 2 days ago 1.15GB
2025-06-02 13:44:19.955247 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 2 days ago 1.04GB
2025-06-02 13:44:19.955258 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 2 days ago 1.25GB
2025-06-02 13:44:19.955269 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 2 days ago 1.2GB
2025-06-02 13:44:19.955280 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 2 days ago 1.31GB
2025-06-02 13:44:19.955290 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 2 days ago 1.12GB
2025-06-02 13:44:19.955301 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 2 days ago 1.12GB
2025-06-02 13:44:19.955312 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 2 days ago 1.1GB
2025-06-02 13:44:19.955322 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 2 days ago 1.1GB
2025-06-02 13:44:19.955353 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 2 days ago 1.1GB
2025-06-02 13:44:19.955366 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 2 days ago 1.41GB
2025-06-02 13:44:19.955378 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 2 days ago 1.41GB
2025-06-02 13:44:19.955390 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 2 days ago 1.06GB
2025-06-02 13:44:19.955409 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 2 days ago 1.06GB
2025-06-02 13:44:19.955421 | orchestrator |
registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 2 days ago 1.05GB 2025-06-02 13:44:19.955434 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 2 days ago 1.05GB 2025-06-02 13:44:19.955445 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 2 days ago 1.05GB 2025-06-02 13:44:19.955457 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 2 days ago 1.05GB 2025-06-02 13:44:19.955470 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 2 days ago 1.3GB 2025-06-02 13:44:19.955482 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 2 days ago 1.29GB 2025-06-02 13:44:19.955495 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 2 days ago 1.42GB 2025-06-02 13:44:19.955507 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 2 days ago 1.29GB 2025-06-02 13:44:19.955519 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 2 days ago 1.06GB 2025-06-02 13:44:19.955531 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 2 days ago 1.06GB 2025-06-02 13:44:19.955544 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 2 days ago 1.06GB 2025-06-02 13:44:19.955564 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 2 days ago 1.11GB 2025-06-02 13:44:19.955577 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 2 days ago 1.13GB 2025-06-02 13:44:19.955589 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 2 days ago 1.11GB 2025-06-02 13:44:19.955600 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 2 days ago 947MB 2025-06-02 13:44:19.955614 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 2 days ago 948MB 2025-06-02 13:44:19.955626 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 2 days ago 947MB 2025-06-02 13:44:19.955638 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 2 days ago 948MB 2025-06-02 13:44:19.955650 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-02 13:44:20.214403 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-02 13:44:20.224097 | orchestrator | + set -e 2025-06-02 13:44:20.224161 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 13:44:20.225728 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 13:44:20.225751 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 13:44:20.225763 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 13:44:20.225775 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 13:44:20.225787 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 13:44:20.225800 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 13:44:20.225812 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 13:44:20.225823 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 13:44:20.225835 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 13:44:20.225846 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 13:44:20.225858 | orchestrator | ++ export ARA=false 2025-06-02 13:44:20.225870 | orchestrator | ++ ARA=false 2025-06-02 13:44:20.225882 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 13:44:20.225894 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 13:44:20.225905 | orchestrator | ++ export TEMPEST=false 2025-06-02 13:44:20.225917 | orchestrator | ++ TEMPEST=false 2025-06-02 13:44:20.225928 
| orchestrator | ++ export IS_ZUUL=true 2025-06-02 13:44:20.225940 | orchestrator | ++ IS_ZUUL=true 2025-06-02 13:44:20.225957 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2025-06-02 13:44:20.225969 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2025-06-02 13:44:20.225981 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 13:44:20.225993 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 13:44:20.226005 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 13:44:20.226069 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 13:44:20.226084 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 13:44:20.226096 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 13:44:20.226107 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 13:44:20.226119 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 13:44:20.226131 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 13:44:20.226143 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-02 13:44:20.234141 | orchestrator | + set -e 2025-06-02 13:44:20.234218 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 13:44:20.234237 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 13:44:20.234253 | orchestrator | ++ INTERACTIVE=false 2025-06-02 13:44:20.234265 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 13:44:20.234277 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 13:44:20.234289 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 13:44:20.235009 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 13:44:20.243300 | orchestrator | 2025-06-02 13:44:20.243377 | orchestrator | # Ceph status 2025-06-02 13:44:20.243393 | orchestrator | 2025-06-02 13:44:20.243404 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 13:44:20.243416 | orchestrator | ++ 
MANAGER_VERSION=9.1.0 2025-06-02 13:44:20.243428 | orchestrator | + echo 2025-06-02 13:44:20.243439 | orchestrator | + echo '# Ceph status' 2025-06-02 13:44:20.243450 | orchestrator | + echo 2025-06-02 13:44:20.243486 | orchestrator | + ceph -s 2025-06-02 13:44:20.835072 | orchestrator | cluster: 2025-06-02 13:44:20.835177 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-02 13:44:20.835270 | orchestrator | health: HEALTH_OK 2025-06-02 13:44:20.835286 | orchestrator | 2025-06-02 13:44:20.835298 | orchestrator | services: 2025-06-02 13:44:20.835310 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-06-02 13:44:20.835323 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-1, testbed-node-2 2025-06-02 13:44:20.835335 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-02 13:44:20.835345 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 23m) 2025-06-02 13:44:20.835357 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-02 13:44:20.835368 | orchestrator | 2025-06-02 13:44:20.835379 | orchestrator | data: 2025-06-02 13:44:20.835390 | orchestrator | volumes: 1/1 healthy 2025-06-02 13:44:20.835400 | orchestrator | pools: 14 pools, 401 pgs 2025-06-02 13:44:20.835411 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-02 13:44:20.835422 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-02 13:44:20.835433 | orchestrator | pgs: 401 active+clean 2025-06-02 13:44:20.835444 | orchestrator | 2025-06-02 13:44:20.881086 | orchestrator | 2025-06-02 13:44:20.881166 | orchestrator | # Ceph versions 2025-06-02 13:44:20.881175 | orchestrator | 2025-06-02 13:44:20.881184 | orchestrator | + echo 2025-06-02 13:44:20.881249 | orchestrator | + echo '# Ceph versions' 2025-06-02 13:44:20.881262 | orchestrator | + echo 2025-06-02 13:44:20.881271 | orchestrator | + ceph versions 2025-06-02 13:44:21.452719 | orchestrator | { 2025-06-02 13:44:21.453602 | 
orchestrator | "mon": { 2025-06-02 13:44:21.453636 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 13:44:21.453649 | orchestrator | }, 2025-06-02 13:44:21.453661 | orchestrator | "mgr": { 2025-06-02 13:44:21.453672 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 13:44:21.453683 | orchestrator | }, 2025-06-02 13:44:21.453693 | orchestrator | "osd": { 2025-06-02 13:44:21.453704 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-02 13:44:21.453715 | orchestrator | }, 2025-06-02 13:44:21.453726 | orchestrator | "mds": { 2025-06-02 13:44:21.453737 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 13:44:21.453747 | orchestrator | }, 2025-06-02 13:44:21.453758 | orchestrator | "rgw": { 2025-06-02 13:44:21.453769 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 13:44:21.453779 | orchestrator | }, 2025-06-02 13:44:21.453790 | orchestrator | "overall": { 2025-06-02 13:44:21.453802 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-02 13:44:21.453814 | orchestrator | } 2025-06-02 13:44:21.453824 | orchestrator | } 2025-06-02 13:44:21.502284 | orchestrator | 2025-06-02 13:44:21.502347 | orchestrator | # Ceph OSD tree 2025-06-02 13:44:21.502362 | orchestrator | 2025-06-02 13:44:21.502374 | orchestrator | + echo 2025-06-02 13:44:21.502385 | orchestrator | + echo '# Ceph OSD tree' 2025-06-02 13:44:21.502397 | orchestrator | + echo 2025-06-02 13:44:21.502408 | orchestrator | + ceph osd df tree 2025-06-02 13:44:22.009695 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-02 13:44:22.009812 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 421 MiB 113 GiB 5.91 
1.00 - root default 2025-06-02 13:44:22.009820 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2025-06-02 13:44:22.009827 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.51 1.27 191 up osd.0 2025-06-02 13:44:22.009833 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 880 MiB 811 MiB 1 KiB 70 MiB 19 GiB 4.30 0.73 197 up osd.5 2025-06-02 13:44:22.009841 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2025-06-02 13:44:22.009849 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 984 MiB 915 MiB 1 KiB 70 MiB 19 GiB 4.81 0.81 209 up osd.1 2025-06-02 13:44:22.009874 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.00 1.18 181 up osd.3 2025-06-02 13:44:22.009882 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-02 13:44:22.009888 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.30 1.23 203 up osd.2 2025-06-02 13:44:22.009895 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 928 MiB 859 MiB 1 KiB 70 MiB 19 GiB 4.54 0.77 189 up osd.4 2025-06-02 13:44:22.009902 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 421 MiB 113 GiB 5.91 2025-06-02 13:44:22.009909 | orchestrator | MIN/MAX VAR: 0.73/1.27 STDDEV: 1.38 2025-06-02 13:44:22.061837 | orchestrator | 2025-06-02 13:44:22.061914 | orchestrator | # Ceph monitor status 2025-06-02 13:44:22.061923 | orchestrator | 2025-06-02 13:44:22.061930 | orchestrator | + echo 2025-06-02 13:44:22.061938 | orchestrator | + echo '# Ceph monitor status' 2025-06-02 13:44:22.061944 | orchestrator | + echo 2025-06-02 13:44:22.061950 | orchestrator | + ceph mon stat 2025-06-02 13:44:22.674728 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-02 13:44:22.734098 | orchestrator | 2025-06-02 13:44:22.734255 | orchestrator | # Ceph quorum status 2025-06-02 13:44:22.734275 | orchestrator | 2025-06-02 13:44:22.734288 | orchestrator | + echo 2025-06-02 13:44:22.734299 | orchestrator | + echo '# Ceph quorum status' 2025-06-02 13:44:22.734310 | orchestrator | + echo 2025-06-02 13:44:22.735037 | orchestrator | + ceph quorum_status 2025-06-02 13:44:22.735061 | orchestrator | + jq 2025-06-02 13:44:23.366971 | orchestrator | { 2025-06-02 13:44:23.367071 | orchestrator | "election_epoch": 8, 2025-06-02 13:44:23.367087 | orchestrator | "quorum": [ 2025-06-02 13:44:23.367099 | orchestrator | 0, 2025-06-02 13:44:23.367110 | orchestrator | 1, 2025-06-02 13:44:23.367121 | orchestrator | 2 2025-06-02 13:44:23.367360 | orchestrator | ], 2025-06-02 13:44:23.367382 | orchestrator | "quorum_names": [ 2025-06-02 13:44:23.367394 | orchestrator | "testbed-node-0", 2025-06-02 13:44:23.367405 | orchestrator | "testbed-node-1", 2025-06-02 13:44:23.367416 | orchestrator | "testbed-node-2" 2025-06-02 13:44:23.367427 | orchestrator | ], 2025-06-02 13:44:23.367438 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-02 13:44:23.367449 | orchestrator | "quorum_age": 1594, 2025-06-02 13:44:23.367461 | orchestrator | "features": { 2025-06-02 13:44:23.367472 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-02 13:44:23.367482 | orchestrator | "quorum_mon": [ 2025-06-02 13:44:23.367493 | orchestrator | "kraken", 2025-06-02 13:44:23.367504 | orchestrator | "luminous", 2025-06-02 13:44:23.367515 | orchestrator | "mimic", 2025-06-02 13:44:23.367525 | orchestrator | 
"osdmap-prune", 2025-06-02 13:44:23.367536 | orchestrator | "nautilus", 2025-06-02 13:44:23.367547 | orchestrator | "octopus", 2025-06-02 13:44:23.367558 | orchestrator | "pacific", 2025-06-02 13:44:23.367568 | orchestrator | "elector-pinging", 2025-06-02 13:44:23.367579 | orchestrator | "quincy", 2025-06-02 13:44:23.367590 | orchestrator | "reef" 2025-06-02 13:44:23.367601 | orchestrator | ] 2025-06-02 13:44:23.367613 | orchestrator | }, 2025-06-02 13:44:23.367624 | orchestrator | "monmap": { 2025-06-02 13:44:23.367635 | orchestrator | "epoch": 1, 2025-06-02 13:44:23.367646 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-02 13:44:23.367657 | orchestrator | "modified": "2025-06-02T13:17:32.769006Z", 2025-06-02 13:44:23.367668 | orchestrator | "created": "2025-06-02T13:17:32.769006Z", 2025-06-02 13:44:23.367679 | orchestrator | "min_mon_release": 18, 2025-06-02 13:44:23.367690 | orchestrator | "min_mon_release_name": "reef", 2025-06-02 13:44:23.367700 | orchestrator | "election_strategy": 1, 2025-06-02 13:44:23.367729 | orchestrator | "disallowed_leaders: ": "", 2025-06-02 13:44:23.367740 | orchestrator | "stretch_mode": false, 2025-06-02 13:44:23.367751 | orchestrator | "tiebreaker_mon": "", 2025-06-02 13:44:23.367762 | orchestrator | "removed_ranks: ": "", 2025-06-02 13:44:23.367773 | orchestrator | "features": { 2025-06-02 13:44:23.367783 | orchestrator | "persistent": [ 2025-06-02 13:44:23.367794 | orchestrator | "kraken", 2025-06-02 13:44:23.367805 | orchestrator | "luminous", 2025-06-02 13:44:23.367836 | orchestrator | "mimic", 2025-06-02 13:44:23.367848 | orchestrator | "osdmap-prune", 2025-06-02 13:44:23.367859 | orchestrator | "nautilus", 2025-06-02 13:44:23.367869 | orchestrator | "octopus", 2025-06-02 13:44:23.367880 | orchestrator | "pacific", 2025-06-02 13:44:23.367891 | orchestrator | "elector-pinging", 2025-06-02 13:44:23.367902 | orchestrator | "quincy", 2025-06-02 13:44:23.367913 | orchestrator | "reef" 2025-06-02 
13:44:23.367924 | orchestrator | ], 2025-06-02 13:44:23.367934 | orchestrator | "optional": [] 2025-06-02 13:44:23.367945 | orchestrator | }, 2025-06-02 13:44:23.367956 | orchestrator | "mons": [ 2025-06-02 13:44:23.367967 | orchestrator | { 2025-06-02 13:44:23.367977 | orchestrator | "rank": 0, 2025-06-02 13:44:23.367989 | orchestrator | "name": "testbed-node-0", 2025-06-02 13:44:23.368002 | orchestrator | "public_addrs": { 2025-06-02 13:44:23.368015 | orchestrator | "addrvec": [ 2025-06-02 13:44:23.368027 | orchestrator | { 2025-06-02 13:44:23.368040 | orchestrator | "type": "v2", 2025-06-02 13:44:23.368053 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-02 13:44:23.368064 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368075 | orchestrator | }, 2025-06-02 13:44:23.368086 | orchestrator | { 2025-06-02 13:44:23.368096 | orchestrator | "type": "v1", 2025-06-02 13:44:23.368107 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-02 13:44:23.368118 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368129 | orchestrator | } 2025-06-02 13:44:23.368140 | orchestrator | ] 2025-06-02 13:44:23.368151 | orchestrator | }, 2025-06-02 13:44:23.368162 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-02 13:44:23.368173 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-02 13:44:23.368183 | orchestrator | "priority": 0, 2025-06-02 13:44:23.368219 | orchestrator | "weight": 0, 2025-06-02 13:44:23.368230 | orchestrator | "crush_location": "{}" 2025-06-02 13:44:23.368241 | orchestrator | }, 2025-06-02 13:44:23.368252 | orchestrator | { 2025-06-02 13:44:23.368262 | orchestrator | "rank": 1, 2025-06-02 13:44:23.368273 | orchestrator | "name": "testbed-node-1", 2025-06-02 13:44:23.368284 | orchestrator | "public_addrs": { 2025-06-02 13:44:23.368295 | orchestrator | "addrvec": [ 2025-06-02 13:44:23.368305 | orchestrator | { 2025-06-02 13:44:23.368316 | orchestrator | "type": "v2", 2025-06-02 13:44:23.368327 | orchestrator | "addr": "192.168.16.11:3300", 
2025-06-02 13:44:23.368337 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368348 | orchestrator | }, 2025-06-02 13:44:23.368358 | orchestrator | { 2025-06-02 13:44:23.368369 | orchestrator | "type": "v1", 2025-06-02 13:44:23.368380 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-02 13:44:23.368390 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368401 | orchestrator | } 2025-06-02 13:44:23.368412 | orchestrator | ] 2025-06-02 13:44:23.368422 | orchestrator | }, 2025-06-02 13:44:23.368433 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-02 13:44:23.368444 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-02 13:44:23.368454 | orchestrator | "priority": 0, 2025-06-02 13:44:23.368465 | orchestrator | "weight": 0, 2025-06-02 13:44:23.368475 | orchestrator | "crush_location": "{}" 2025-06-02 13:44:23.368486 | orchestrator | }, 2025-06-02 13:44:23.368497 | orchestrator | { 2025-06-02 13:44:23.368507 | orchestrator | "rank": 2, 2025-06-02 13:44:23.368518 | orchestrator | "name": "testbed-node-2", 2025-06-02 13:44:23.368529 | orchestrator | "public_addrs": { 2025-06-02 13:44:23.368539 | orchestrator | "addrvec": [ 2025-06-02 13:44:23.368550 | orchestrator | { 2025-06-02 13:44:23.368561 | orchestrator | "type": "v2", 2025-06-02 13:44:23.368571 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-02 13:44:23.368582 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368593 | orchestrator | }, 2025-06-02 13:44:23.368604 | orchestrator | { 2025-06-02 13:44:23.368614 | orchestrator | "type": "v1", 2025-06-02 13:44:23.368625 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-02 13:44:23.368636 | orchestrator | "nonce": 0 2025-06-02 13:44:23.368646 | orchestrator | } 2025-06-02 13:44:23.368657 | orchestrator | ] 2025-06-02 13:44:23.368667 | orchestrator | }, 2025-06-02 13:44:23.368678 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-02 13:44:23.368689 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-02 13:44:23.368700 | 
orchestrator | "priority": 0, 2025-06-02 13:44:23.368719 | orchestrator | "weight": 0, 2025-06-02 13:44:23.368729 | orchestrator | "crush_location": "{}" 2025-06-02 13:44:23.368740 | orchestrator | } 2025-06-02 13:44:23.368751 | orchestrator | ] 2025-06-02 13:44:23.368762 | orchestrator | } 2025-06-02 13:44:23.368773 | orchestrator | } 2025-06-02 13:44:23.368795 | orchestrator | 2025-06-02 13:44:23.368807 | orchestrator | # Ceph free space status 2025-06-02 13:44:23.368818 | orchestrator | 2025-06-02 13:44:23.368829 | orchestrator | + echo 2025-06-02 13:44:23.368840 | orchestrator | + echo '# Ceph free space status' 2025-06-02 13:44:23.368851 | orchestrator | + echo 2025-06-02 13:44:23.368862 | orchestrator | + ceph df 2025-06-02 13:44:23.975384 | orchestrator | --- RAW STORAGE --- 2025-06-02 13:44:23.975486 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-02 13:44:23.975514 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-02 13:44:23.975526 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2025-06-02 13:44:23.975537 | orchestrator | 2025-06-02 13:44:23.975549 | orchestrator | --- POOLS --- 2025-06-02 13:44:23.975560 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-02 13:44:23.975572 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-06-02 13:44:23.975584 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-02 13:44:23.975595 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-02 13:44:23.975606 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-02 13:44:23.975616 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-02 13:44:23.975627 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-02 13:44:23.975637 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-02 13:44:23.975648 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-02 13:44:23.975659 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2025-06-02 13:44:23.975669 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 13:44:23.975680 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 13:44:23.975690 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2025-06-02 13:44:23.975701 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 13:44:23.975711 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-02 13:44:24.031293 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-02 13:44:24.080812 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-02 13:44:24.080906 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-02 13:44:24.080923 | orchestrator | + osism apply facts 2025-06-02 13:44:25.809619 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:44:25.809718 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:44:25.809733 | orchestrator | Registering Redlock._release_script 2025-06-02 13:44:25.870424 | orchestrator | 2025-06-02 13:44:25 | INFO  | Task 81a9a8ec-e035-48be-8f2f-b34aaa6cc1bb (facts) was prepared for execution. 2025-06-02 13:44:25.870918 | orchestrator | 2025-06-02 13:44:25 | INFO  | It takes a moment until task 81a9a8ec-e035-48be-8f2f-b34aaa6cc1bb (facts) has been started and output is visible here. 
2025-06-02 13:44:29.953850 | orchestrator | 2025-06-02 13:44:29.957402 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-02 13:44:29.957438 | orchestrator | 2025-06-02 13:44:29.957450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-02 13:44:29.957462 | orchestrator | Monday 02 June 2025 13:44:29 +0000 (0:00:00.262) 0:00:00.262 *********** 2025-06-02 13:44:31.099985 | orchestrator | ok: [testbed-manager] 2025-06-02 13:44:31.100787 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:44:31.102320 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:44:31.103892 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:44:31.104365 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:44:31.105592 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:44:31.107046 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:44:31.107092 | orchestrator | 2025-06-02 13:44:31.108266 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-02 13:44:31.108917 | orchestrator | Monday 02 June 2025 13:44:31 +0000 (0:00:01.146) 0:00:01.408 *********** 2025-06-02 13:44:31.306408 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:44:31.401670 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:44:31.500113 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:44:31.581027 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:44:31.660492 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:44:32.427711 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:44:32.428285 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:44:32.429286 | orchestrator | 2025-06-02 13:44:32.430301 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 13:44:32.431738 | orchestrator | 2025-06-02 13:44:32.432497 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-02 13:44:32.434106 | orchestrator | Monday 02 June 2025 13:44:32 +0000 (0:00:01.329) 0:00:02.738 *********** 2025-06-02 13:44:38.436898 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:44:38.438000 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:44:38.439520 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:44:38.440596 | orchestrator | ok: [testbed-manager] 2025-06-02 13:44:38.441909 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:44:38.442739 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:44:38.443819 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:44:38.444669 | orchestrator | 2025-06-02 13:44:38.445736 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 13:44:38.445887 | orchestrator | 2025-06-02 13:44:38.447320 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 13:44:38.447868 | orchestrator | Monday 02 June 2025 13:44:38 +0000 (0:00:06.010) 0:00:08.749 *********** 2025-06-02 13:44:38.629588 | orchestrator | skipping: [testbed-manager] 2025-06-02 13:44:38.716485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 13:44:38.794302 | orchestrator | skipping: [testbed-node-1] 2025-06-02 13:44:38.875027 | orchestrator | skipping: [testbed-node-2] 2025-06-02 13:44:38.952603 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:44:38.998885 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:44:38.998957 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:44:38.999589 | orchestrator | 2025-06-02 13:44:39.000519 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:44:39.000852 | orchestrator | 2025-06-02 13:44:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 13:44:39.001342 | orchestrator | 2025-06-02 13:44:39 | INFO  | Please wait and do not abort execution.
2025-06-02 13:44:39.001606 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.001824 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.002176 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.002643 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.003460 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.003992 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.004364 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:44:39.004663 | orchestrator |
2025-06-02 13:44:39.005152 | orchestrator |
2025-06-02 13:44:39.005437 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:44:39.005752 | orchestrator | Monday 02 June 2025 13:44:38 +0000 (0:00:00.562) 0:00:09.312 ***********
2025-06-02 13:44:39.006109 | orchestrator | ===============================================================================
2025-06-02 13:44:39.006379 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.01s
2025-06-02 13:44:39.006660 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s
2025-06-02 13:44:39.006890 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s
2025-06-02 13:44:39.007244 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2025-06-02 13:44:39.723966 | orchestrator | + osism validate ceph-mons
2025-06-02 13:44:41.432916 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:44:41.433016 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:44:41.433032 | orchestrator | Registering Redlock._release_script
2025-06-02 13:45:01.372900 | orchestrator |
2025-06-02 13:45:01.373022 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-06-02 13:45:01.373041 | orchestrator |
2025-06-02 13:45:01.373053 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 13:45:01.373079 | orchestrator | Monday 02 June 2025 13:44:45 +0000 (0:00:00.448) 0:00:00.448 ***********
2025-06-02 13:45:01.373091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.373102 | orchestrator |
2025-06-02 13:45:01.373113 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 13:45:01.373124 | orchestrator | Monday 02 June 2025 13:44:46 +0000 (0:00:00.620) 0:00:01.069 ***********
2025-06-02 13:45:01.373135 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.373146 | orchestrator |
2025-06-02 13:45:01.373157 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 13:45:01.373168 | orchestrator | Monday 02 June 2025 13:44:47 +0000 (0:00:00.884) 0:00:01.954 ***********
2025-06-02 13:45:01.373179 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373190 | orchestrator |
2025-06-02 13:45:01.373201 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-02 13:45:01.373254 | orchestrator | Monday 02 June 2025 13:44:47 +0000 (0:00:00.284) 0:00:02.239 ***********
2025-06-02 13:45:01.373266 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373277 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:01.373288 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:01.373299 | orchestrator |
2025-06-02 13:45:01.373309 | orchestrator | TASK [Get container info] ******************************************************
2025-06-02 13:45:01.373320 | orchestrator | Monday 02 June 2025 13:44:47 +0000 (0:00:00.298) 0:00:02.537 ***********
2025-06-02 13:45:01.373331 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373341 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:01.373352 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:01.373363 | orchestrator |
2025-06-02 13:45:01.373374 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-02 13:45:01.373384 | orchestrator | Monday 02 June 2025 13:44:48 +0000 (0:00:00.969) 0:00:03.507 ***********
2025-06-02 13:45:01.373395 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.373409 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:45:01.373422 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:45:01.373435 | orchestrator |
2025-06-02 13:45:01.373449 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-02 13:45:01.373462 | orchestrator | Monday 02 June 2025 13:44:49 +0000 (0:00:00.313) 0:00:03.821 ***********
2025-06-02 13:45:01.373474 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373486 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:01.373499 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:01.373511 | orchestrator |
2025-06-02 13:45:01.373524 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 13:45:01.373555 | orchestrator | Monday 02 June 2025 13:44:49 +0000 (0:00:00.529) 0:00:04.350 ***********
2025-06-02 13:45:01.373568 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373580 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:01.373592 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:01.373605 | orchestrator |
2025-06-02 13:45:01.373618 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-06-02 13:45:01.373631 | orchestrator | Monday 02 June 2025 13:44:50 +0000 (0:00:00.325) 0:00:04.675 ***********
2025-06-02 13:45:01.373644 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.373657 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:45:01.373669 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:45:01.373682 | orchestrator |
2025-06-02 13:45:01.373694 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-06-02 13:45:01.373707 | orchestrator | Monday 02 June 2025 13:44:50 +0000 (0:00:00.288) 0:00:04.964 ***********
2025-06-02 13:45:01.373719 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.373731 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:01.373745 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:01.373763 | orchestrator |
2025-06-02 13:45:01.373781 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 13:45:01.373793 | orchestrator | Monday 02 June 2025 13:44:50 +0000 (0:00:00.303) 0:00:05.267 ***********
2025-06-02 13:45:01.373804 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.373814 | orchestrator |
2025-06-02 13:45:01.373825 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 13:45:01.373835 | orchestrator | Monday 02 June 2025 13:44:51 +0000 (0:00:00.719) 0:00:05.987 ***********
2025-06-02 13:45:01.373846 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.373857 | orchestrator |
2025-06-02 13:45:01.373867 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 13:45:01.373878 | orchestrator | Monday 02 June 2025 13:44:51 +0000 (0:00:00.244) 0:00:06.231 ***********
2025-06-02 13:45:01.373889 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.373899 | orchestrator |
2025-06-02 13:45:01.373910 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:01.373920 | orchestrator | Monday 02 June 2025 13:44:51 +0000 (0:00:00.068) 0:00:06.470 ***********
2025-06-02 13:45:01.373931 | orchestrator |
2025-06-02 13:45:01.373942 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:01.373952 | orchestrator | Monday 02 June 2025 13:44:51 +0000 (0:00:00.072) 0:00:06.539 ***********
2025-06-02 13:45:01.373963 | orchestrator |
2025-06-02 13:45:01.373973 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:01.373985 | orchestrator | Monday 02 June 2025 13:44:52 +0000 (0:00:00.072) 0:00:06.612 ***********
2025-06-02 13:45:01.373996 | orchestrator |
2025-06-02 13:45:01.374007 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 13:45:01.374063 | orchestrator | Monday 02 June 2025 13:44:52 +0000 (0:00:00.072) 0:00:06.684 ***********
2025-06-02 13:45:01.374075 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374086 | orchestrator |
2025-06-02 13:45:01.374096 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-02 13:45:01.374107 | orchestrator | Monday 02 June 2025 13:44:52 +0000 (0:00:00.248) 0:00:06.932 ***********
2025-06-02 13:45:01.374118 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374128 | orchestrator |
2025-06-02 13:45:01.374156 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-06-02 13:45:01.374168 | orchestrator | Monday 02 June 2025 13:44:52 +0000 (0:00:00.272) 0:00:07.204 ***********
2025-06-02 13:45:01.374178 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374189 | orchestrator |
2025-06-02 13:45:01.374206 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-06-02 13:45:01.374237 | orchestrator | Monday 02 June 2025 13:44:52 +0000 (0:00:00.115) 0:00:07.319 ***********
2025-06-02 13:45:01.374257 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:45:01.374268 | orchestrator |
2025-06-02 13:45:01.374279 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-02 13:45:01.374290 | orchestrator | Monday 02 June 2025 13:44:54 +0000 (0:00:01.556) 0:00:08.876 ***********
2025-06-02 13:45:01.374300 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374311 | orchestrator |
2025-06-02 13:45:01.374321 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-02 13:45:01.374332 | orchestrator | Monday 02 June 2025 13:44:54 +0000 (0:00:00.335) 0:00:09.212 ***********
2025-06-02 13:45:01.374342 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374353 | orchestrator |
2025-06-02 13:45:01.374364 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-02 13:45:01.374374 | orchestrator | Monday 02 June 2025 13:44:54 +0000 (0:00:00.338) 0:00:09.550 ***********
2025-06-02 13:45:01.374385 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374395 | orchestrator |
2025-06-02 13:45:01.374406 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-02 13:45:01.374416 | orchestrator | Monday 02 June 2025 13:44:55 +0000 (0:00:00.356) 0:00:09.907 ***********
2025-06-02 13:45:01.374427 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374437 | orchestrator |
2025-06-02 13:45:01.374448 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-02 13:45:01.374459 | orchestrator | Monday 02 June 2025 13:44:55 +0000 (0:00:00.344) 0:00:10.251 ***********
2025-06-02 13:45:01.374469 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374480 | orchestrator |
2025-06-02 13:45:01.374490 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-02 13:45:01.374501 | orchestrator | Monday 02 June 2025 13:44:55 +0000 (0:00:00.112) 0:00:10.363 ***********
2025-06-02 13:45:01.374511 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374522 | orchestrator |
2025-06-02 13:45:01.374533 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-02 13:45:01.374543 | orchestrator | Monday 02 June 2025 13:44:55 +0000 (0:00:00.132) 0:00:10.496 ***********
2025-06-02 13:45:01.374554 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374564 | orchestrator |
2025-06-02 13:45:01.374575 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-02 13:45:01.374586 | orchestrator | Monday 02 June 2025 13:44:56 +0000 (0:00:00.159) 0:00:10.655 ***********
2025-06-02 13:45:01.374596 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:45:01.374607 | orchestrator |
2025-06-02 13:45:01.374617 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-02 13:45:01.374628 | orchestrator | Monday 02 June 2025 13:44:57 +0000 (0:00:01.281) 0:00:11.937 ***********
2025-06-02 13:45:01.374638 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374649 | orchestrator |
2025-06-02 13:45:01.374660 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-02 13:45:01.374670 | orchestrator | Monday 02 June 2025 13:44:57 +0000 (0:00:00.335) 0:00:12.272 ***********
2025-06-02 13:45:01.374681 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374692 | orchestrator |
2025-06-02 13:45:01.374702 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-02 13:45:01.374713 | orchestrator | Monday 02 June 2025 13:44:57 +0000 (0:00:00.144) 0:00:12.416 ***********
2025-06-02 13:45:01.374723 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:01.374734 | orchestrator |
2025-06-02 13:45:01.374744 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-02 13:45:01.374755 | orchestrator | Monday 02 June 2025 13:44:57 +0000 (0:00:00.151) 0:00:12.568 ***********
2025-06-02 13:45:01.374765 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374776 | orchestrator |
2025-06-02 13:45:01.374786 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-02 13:45:01.374797 | orchestrator | Monday 02 June 2025 13:44:58 +0000 (0:00:00.143) 0:00:12.711 ***********
2025-06-02 13:45:01.374813 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374824 | orchestrator |
2025-06-02 13:45:01.374835 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 13:45:01.374845 | orchestrator | Monday 02 June 2025 13:44:58 +0000 (0:00:00.350) 0:00:13.061 ***********
2025-06-02 13:45:01.374856 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.374867 | orchestrator |
2025-06-02 13:45:01.374877 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 13:45:01.374888 | orchestrator | Monday 02 June 2025 13:44:58 +0000 (0:00:00.253) 0:00:13.314 ***********
2025-06-02 13:45:01.374898 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:01.374909 | orchestrator |
2025-06-02 13:45:01.374919 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 13:45:01.374930 | orchestrator | Monday 02 June 2025 13:44:58 +0000 (0:00:00.256) 0:00:13.571 ***********
2025-06-02 13:45:01.374941 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.374951 | orchestrator |
2025-06-02 13:45:01.374962 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 13:45:01.374972 | orchestrator | Monday 02 June 2025 13:45:00 +0000 (0:00:01.634) 0:00:15.205 ***********
2025-06-02 13:45:01.374983 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.374993 | orchestrator |
2025-06-02 13:45:01.375004 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 13:45:01.375015 | orchestrator | Monday 02 June 2025 13:45:00 +0000 (0:00:00.266) 0:00:15.472 ***********
2025-06-02 13:45:01.375025 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:01.375041 | orchestrator |
2025-06-02 13:45:01.375058 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:03.990657 | orchestrator | Monday 02 June 2025 13:45:01 +0000 (0:00:00.249) 0:00:15.721 ***********
2025-06-02 13:45:03.990759 | orchestrator |
2025-06-02 13:45:03.990774 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:03.990787 | orchestrator | Monday 02 June 2025 13:45:01 +0000 (0:00:00.069) 0:00:15.790 ***********
2025-06-02 13:45:03.990797 | orchestrator |
2025-06-02 13:45:03.990829 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:03.990840 | orchestrator | Monday 02 June 2025 13:45:01 +0000 (0:00:00.074) 0:00:15.865 ***********
2025-06-02 13:45:03.990851 | orchestrator |
2025-06-02 13:45:03.990861 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 13:45:03.990872 | orchestrator | Monday 02 June 2025 13:45:01 +0000 (0:00:00.077) 0:00:15.943 ***********
2025-06-02 13:45:03.990883 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:03.990894 | orchestrator |
2025-06-02 13:45:03.990904 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 13:45:03.990915 | orchestrator | Monday 02 June 2025 13:45:03 +0000 (0:00:01.682) 0:00:17.625 ***********
2025-06-02 13:45:03.990926 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-02 13:45:03.990937 | orchestrator |  "msg": [
2025-06-02 13:45:03.990949 | orchestrator |  "Validator run completed.",
2025-06-02 13:45:03.990961 | orchestrator |  "You can find the report file here:",
2025-06-02 13:45:03.990972 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-02T13:44:46+00:00-report.json",
2025-06-02 13:45:03.990984 | orchestrator |  "on the following host:",
2025-06-02 13:45:03.990994 | orchestrator |  "testbed-manager"
2025-06-02 13:45:03.991005 | orchestrator |  ]
2025-06-02 13:45:03.991016 | orchestrator | }
2025-06-02 13:45:03.991027 | orchestrator |
2025-06-02 13:45:03.991038 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:45:03.991050 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-02 13:45:03.991086 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:45:03.991098 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:45:03.991108 | orchestrator |
2025-06-02 13:45:03.991119 | orchestrator |
2025-06-02 13:45:03.991130 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:45:03.991141 | orchestrator | Monday 02 June 2025 13:45:03 +0000 (0:00:00.598) 0:00:18.224 ***********
2025-06-02 13:45:03.991151 | orchestrator | ===============================================================================
2025-06-02 13:45:03.991162 | orchestrator | Write report file ------------------------------------------------------- 1.68s
2025-06-02 13:45:03.991173 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s
2025-06-02 13:45:03.991183 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.56s
2025-06-02 13:45:03.991194 | orchestrator | Gather status data ------------------------------------------------------ 1.28s
2025-06-02 13:45:03.991205 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-06-02 13:45:03.991264 | orchestrator | Create report output directory ------------------------------------------ 0.88s
2025-06-02 13:45:03.991275 | orchestrator | Aggregate test results step one ----------------------------------------- 0.72s
2025-06-02 13:45:03.991286 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s
2025-06-02 13:45:03.991297 | orchestrator | Print report file information ------------------------------------------- 0.60s
2025-06-02 13:45:03.991307 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2025-06-02 13:45:03.991318 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.36s
2025-06-02 13:45:03.991329 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.35s
2025-06-02 13:45:03.991340 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2025-06-02 13:45:03.991351 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.34s
2025-06-02 13:45:03.991362 | orchestrator | Set quorum test data ---------------------------------------------------- 0.34s
2025-06-02 13:45:03.991372 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2025-06-02 13:45:03.991383 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-06-02 13:45:03.991399 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2025-06-02 13:45:03.991418 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-06-02 13:45:03.991429 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-06-02 13:45:04.276515 | orchestrator | + osism validate ceph-mgrs
2025-06-02 13:45:05.969897 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:45:05.969994 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:45:05.970009 | orchestrator | Registering Redlock._release_script
2025-06-02 13:45:25.392966 | orchestrator |
2025-06-02 13:45:25.393058 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-02 13:45:25.393069 | orchestrator |
2025-06-02 13:45:25.393076 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 13:45:25.393083 | orchestrator | Monday 02 June 2025 13:45:10 +0000 (0:00:00.474) 0:00:00.474 ***********
2025-06-02 13:45:25.393090 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.393096 | orchestrator |
2025-06-02 13:45:25.393103 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 13:45:25.393109 | orchestrator | Monday 02 June 2025 13:45:11 +0000 (0:00:00.668) 0:00:01.142 ***********
2025-06-02 13:45:25.393128 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.393135 | orchestrator |
2025-06-02 13:45:25.393141 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 13:45:25.393161 | orchestrator | Monday 02 June 2025 13:45:11 +0000 (0:00:00.912) 0:00:02.055 ***********
2025-06-02 13:45:25.393168 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393175 | orchestrator |
2025-06-02 13:45:25.393181 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-02 13:45:25.393187 | orchestrator | Monday 02 June 2025 13:45:12 +0000 (0:00:00.263) 0:00:02.319 ***********
2025-06-02 13:45:25.393193 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393199 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:25.393205 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:25.393212 | orchestrator |
2025-06-02 13:45:25.393218 | orchestrator | TASK [Get container info] ******************************************************
2025-06-02 13:45:25.393224 | orchestrator | Monday 02 June 2025 13:45:12 +0000 (0:00:00.318) 0:00:02.637 ***********
2025-06-02 13:45:25.393284 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393291 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:25.393297 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:25.393303 | orchestrator |
2025-06-02 13:45:25.393309 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-02 13:45:25.393315 | orchestrator | Monday 02 June 2025 13:45:13 +0000 (0:00:00.972) 0:00:03.610 ***********
2025-06-02 13:45:25.393321 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393327 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:45:25.393333 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:45:25.393339 | orchestrator |
2025-06-02 13:45:25.393345 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-02 13:45:25.393351 | orchestrator | Monday 02 June 2025 13:45:13 +0000 (0:00:00.313) 0:00:03.923 ***********
2025-06-02 13:45:25.393357 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393363 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:25.393369 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:25.393375 | orchestrator |
2025-06-02 13:45:25.393381 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 13:45:25.393387 | orchestrator | Monday 02 June 2025 13:45:14 +0000 (0:00:00.550) 0:00:04.474 ***********
2025-06-02 13:45:25.393393 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393399 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:25.393405 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:25.393411 | orchestrator |
2025-06-02 13:45:25.393417 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-02 13:45:25.393423 | orchestrator | Monday 02 June 2025 13:45:14 +0000 (0:00:00.328) 0:00:04.802 ***********
2025-06-02 13:45:25.393429 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393435 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:45:25.393441 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:45:25.393447 | orchestrator |
2025-06-02 13:45:25.393453 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-02 13:45:25.393459 | orchestrator | Monday 02 June 2025 13:45:15 +0000 (0:00:00.302) 0:00:05.105 ***********
2025-06-02 13:45:25.393465 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393472 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:45:25.393478 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:45:25.393484 | orchestrator |
2025-06-02 13:45:25.393490 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 13:45:25.393496 | orchestrator | Monday 02 June 2025 13:45:15 +0000 (0:00:00.312) 0:00:05.417 ***********
2025-06-02 13:45:25.393502 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393510 | orchestrator |
2025-06-02 13:45:25.393517 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 13:45:25.393524 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.685) 0:00:06.103 ***********
2025-06-02 13:45:25.393531 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393538 | orchestrator |
2025-06-02 13:45:25.393545 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 13:45:25.393558 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.245) 0:00:06.348 ***********
2025-06-02 13:45:25.393565 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393572 | orchestrator |
2025-06-02 13:45:25.393579 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.393586 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.236) 0:00:06.585 ***********
2025-06-02 13:45:25.393593 | orchestrator |
2025-06-02 13:45:25.393601 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.393608 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.071) 0:00:06.656 ***********
2025-06-02 13:45:25.393614 | orchestrator |
2025-06-02 13:45:25.393621 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.393628 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.072) 0:00:06.729 ***********
2025-06-02 13:45:25.393635 | orchestrator |
2025-06-02 13:45:25.393642 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 13:45:25.393649 | orchestrator | Monday 02 June 2025 13:45:16 +0000 (0:00:00.092) 0:00:06.821 ***********
2025-06-02 13:45:25.393656 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393663 | orchestrator |
2025-06-02 13:45:25.393670 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-02 13:45:25.393677 | orchestrator | Monday 02 June 2025 13:45:17 +0000 (0:00:00.248) 0:00:07.070 ***********
2025-06-02 13:45:25.393684 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393691 | orchestrator |
2025-06-02 13:45:25.393710 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-02 13:45:25.393717 | orchestrator | Monday 02 June 2025 13:45:17 +0000 (0:00:00.242) 0:00:07.312 ***********
2025-06-02 13:45:25.393724 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393731 | orchestrator |
2025-06-02 13:45:25.393739 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-02 13:45:25.393746 | orchestrator | Monday 02 June 2025 13:45:17 +0000 (0:00:00.129) 0:00:07.442 ***********
2025-06-02 13:45:25.393753 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:45:25.393761 | orchestrator |
2025-06-02 13:45:25.393768 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-02 13:45:25.393775 | orchestrator | Monday 02 June 2025 13:45:19 +0000 (0:00:01.981) 0:00:09.424 ***********
2025-06-02 13:45:25.393783 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393790 | orchestrator |
2025-06-02 13:45:25.393797 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-02 13:45:25.393804 | orchestrator | Monday 02 June 2025 13:45:19 +0000 (0:00:00.248) 0:00:09.672 ***********
2025-06-02 13:45:25.393812 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393819 | orchestrator |
2025-06-02 13:45:25.393826 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-02 13:45:25.393833 | orchestrator | Monday 02 June 2025 13:45:20 +0000 (0:00:00.844) 0:00:10.516 ***********
2025-06-02 13:45:25.393840 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393848 | orchestrator |
2025-06-02 13:45:25.393855 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-02 13:45:25.393862 | orchestrator | Monday 02 June 2025 13:45:20 +0000 (0:00:00.136) 0:00:10.653 ***********
2025-06-02 13:45:25.393868 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:45:25.393874 | orchestrator |
2025-06-02 13:45:25.393880 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 13:45:25.393886 | orchestrator | Monday 02 June 2025 13:45:20 +0000 (0:00:00.160) 0:00:10.813 ***********
2025-06-02 13:45:25.393892 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.393898 | orchestrator |
2025-06-02 13:45:25.393905 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 13:45:25.393911 | orchestrator | Monday 02 June 2025 13:45:21 +0000 (0:00:00.248) 0:00:11.062 ***********
2025-06-02 13:45:25.393917 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:45:25.393928 | orchestrator |
2025-06-02 13:45:25.393934 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 13:45:25.393940 | orchestrator | Monday 02 June 2025 13:45:21 +0000 (0:00:00.254) 0:00:11.317 ***********
2025-06-02 13:45:25.393946 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.393952 | orchestrator |
2025-06-02 13:45:25.393959 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 13:45:25.393965 | orchestrator | Monday 02 June 2025 13:45:22 +0000 (0:00:01.238) 0:00:12.555 ***********
2025-06-02 13:45:25.393971 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.393977 | orchestrator |
2025-06-02 13:45:25.393983 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 13:45:25.393989 | orchestrator | Monday 02 June 2025 13:45:22 +0000 (0:00:00.241) 0:00:12.797 ***********
2025-06-02 13:45:25.393995 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.394001 | orchestrator |
2025-06-02 13:45:25.394008 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.394014 | orchestrator | Monday 02 June 2025 13:45:22 +0000 (0:00:00.238) 0:00:13.035 ***********
2025-06-02 13:45:25.394061 | orchestrator |
2025-06-02 13:45:25.394067 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.394073 | orchestrator | Monday 02 June 2025 13:45:23 +0000 (0:00:00.071) 0:00:13.107 ***********
2025-06-02 13:45:25.394079 | orchestrator |
2025-06-02 13:45:25.394085 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 13:45:25.394091 | orchestrator | Monday 02 June 2025 13:45:23 +0000 (0:00:00.067) 0:00:13.175 ***********
2025-06-02 13:45:25.394097 | orchestrator |
2025-06-02 13:45:25.394103 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 13:45:25.394109 | orchestrator | Monday 02 June 2025 13:45:23 +0000 (0:00:00.071) 0:00:13.247 ***********
2025-06-02 13:45:25.394116 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 13:45:25.394122 | orchestrator |
2025-06-02 13:45:25.394128 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 13:45:25.394134 | orchestrator | Monday 02 June 2025 13:45:24 +0000 (0:00:01.783) 0:00:15.030 ***********
2025-06-02 13:45:25.394140 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-02 13:45:25.394146 | orchestrator |  "msg": [
2025-06-02 13:45:25.394153 | orchestrator |  "Validator run completed.",
2025-06-02 13:45:25.394159 | orchestrator |  "You can find the report file here:",
2025-06-02 13:45:25.394166 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T13:45:10+00:00-report.json",
2025-06-02 13:45:25.394173 | orchestrator |  "on the following host:",
2025-06-02 13:45:25.394179 | orchestrator |  "testbed-manager"
2025-06-02 13:45:25.394185 | orchestrator |  ]
2025-06-02 13:45:25.394192 | orchestrator | }
2025-06-02 13:45:25.394198 | orchestrator |
2025-06-02 13:45:25.394204 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:45:25.394211 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 13:45:25.394237 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:45:25.394250 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:45:25.742691 | orchestrator |
2025-06-02 13:45:25.742794 | orchestrator |
2025-06-02 13:45:25.742808 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:45:25.742822 | orchestrator | Monday 02 June 2025 13:45:25 +0000 (0:00:00.391) 0:00:15.422 ***********
2025-06-02 13:45:25.742833 | orchestrator | ===============================================================================
2025-06-02 13:45:25.742870 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.98s
2025-06-02 13:45:25.742882 | orchestrator | Write report file ------------------------------------------------------- 1.78s
2025-06-02 13:45:25.742907 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s
2025-06-02 13:45:25.742919 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-06-02 13:45:25.742929 | orchestrator | Create report output directory ------------------------------------------ 0.91s
2025-06-02 13:45:25.742940 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.84s
2025-06-02 13:45:25.742950 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2025-06-02 13:45:25.742961 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s
2025-06-02 13:45:25.742971 | orchestrator | Set test result to passed if container is existing ---------------------- 0.55s
2025-06-02 13:45:25.742982 | orchestrator | Print report file information ------------------------------------------- 0.39s
2025-06-02 13:45:25.742992 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2025-06-02 13:45:25.743003 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2025-06-02 13:45:25.743013 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2025-06-02 13:45:25.743024 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-06-02 13:45:25.743034 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s
2025-06-02 13:45:25.743045 | orchestrator | Define report vars ------------------------------------------------------ 0.26s
2025-06-02 13:45:25.743055 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s
2025-06-02 13:45:25.743066 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s
2025-06-02 13:45:25.743076 | orchestrator | Print report file information ------------------------------------------- 0.25s
2025-06-02 13:45:25.743087 | orchestrator | Parse mgr module list from json
----------------------------------------- 0.25s 2025-06-02 13:45:26.025442 | orchestrator | + osism validate ceph-osds 2025-06-02 13:45:27.745287 | orchestrator | Registering Redlock._acquired_script 2025-06-02 13:45:27.745387 | orchestrator | Registering Redlock._extend_script 2025-06-02 13:45:27.745399 | orchestrator | Registering Redlock._release_script 2025-06-02 13:45:35.666449 | orchestrator | 2025-06-02 13:45:35.667135 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-02 13:45:35.667165 | orchestrator | 2025-06-02 13:45:35.667179 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 13:45:35.667192 | orchestrator | Monday 02 June 2025 13:45:31 +0000 (0:00:00.332) 0:00:00.332 *********** 2025-06-02 13:45:35.667206 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:35.667219 | orchestrator | 2025-06-02 13:45:35.667231 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 13:45:35.667269 | orchestrator | Monday 02 June 2025 13:45:32 +0000 (0:00:00.585) 0:00:00.918 *********** 2025-06-02 13:45:35.667282 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:35.667294 | orchestrator | 2025-06-02 13:45:35.667307 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 13:45:35.667319 | orchestrator | Monday 02 June 2025 13:45:32 +0000 (0:00:00.341) 0:00:01.259 *********** 2025-06-02 13:45:35.667331 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:35.667344 | orchestrator | 2025-06-02 13:45:35.667358 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 13:45:35.667368 | orchestrator | Monday 02 June 2025 13:45:33 +0000 (0:00:00.814) 0:00:02.074 *********** 2025-06-02 13:45:35.667379 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:35.667391 | orchestrator | 2025-06-02 13:45:35.667402 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 13:45:35.667433 | orchestrator | Monday 02 June 2025 13:45:33 +0000 (0:00:00.105) 0:00:02.180 *********** 2025-06-02 13:45:35.667444 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:35.667455 | orchestrator | 2025-06-02 13:45:35.667466 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 13:45:35.667477 | orchestrator | Monday 02 June 2025 13:45:33 +0000 (0:00:00.116) 0:00:02.296 *********** 2025-06-02 13:45:35.667487 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:35.667498 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:35.667509 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:35.667519 | orchestrator | 2025-06-02 13:45:35.667530 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 13:45:35.667541 | orchestrator | Monday 02 June 2025 13:45:34 +0000 (0:00:00.277) 0:00:02.573 *********** 2025-06-02 13:45:35.667551 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:35.667562 | orchestrator | 2025-06-02 13:45:35.667572 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 13:45:35.667583 | orchestrator | Monday 02 June 2025 13:45:34 +0000 (0:00:00.134) 0:00:02.708 *********** 2025-06-02 13:45:35.667593 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:35.667604 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:35.667614 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:35.667625 | orchestrator | 2025-06-02 13:45:35.667636 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-02 13:45:35.667646 | orchestrator | Monday 02 June 2025 13:45:34 +0000 (0:00:00.288) 0:00:02.996 
*********** 2025-06-02 13:45:35.667657 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:35.667667 | orchestrator | 2025-06-02 13:45:35.667678 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 13:45:35.667689 | orchestrator | Monday 02 June 2025 13:45:35 +0000 (0:00:00.472) 0:00:03.469 *********** 2025-06-02 13:45:35.667699 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:35.667710 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:35.667720 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:35.667731 | orchestrator | 2025-06-02 13:45:35.667741 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-02 13:45:35.667752 | orchestrator | Monday 02 June 2025 13:45:35 +0000 (0:00:00.415) 0:00:03.884 *********** 2025-06-02 13:45:35.667777 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97314342583f32504a7d0bbb0451ae9273f8a51a71932987ea9f3d33be35dc93', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 13:45:35.667791 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c1fb18df4d01f1aa3cc5e244e85edc64b6283a0640006bce0eb1e0f5f48bc7b1', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 13:45:35.667802 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd596b9bdaa3554d593758e88021dd761552012ddf761c3f2ab5a8e532910a0ea', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.667814 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e051066c787fc51a88ddd1779ece042776db97a7ace54cd5df678c9a2af29855', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.667826 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ef954b30818131c480292b3bd19f33833135ea0ade2b317feb33f2b33b2bc061', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.667855 | orchestrator | skipping: [testbed-node-3] => (item={'id': '81f691ab1191129fe7e1c4beb7ef93895686e7bd78a7821bd4c29756f0797232', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.667874 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65e87fe612ba8a350434363bcf95db68767d11c0841caecbd1f49b753e230793', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-02 13:45:35.667895 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8ae8f19f4ee3f02cd23ae4c7ea2cb780b2b235a9ee1c5f879073956e63ab5d6', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 13:45:35.667906 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9b88b66540d2f9ea2a255b6c86772bf9c4c3cc64c63e41d054fd27e3d9884e69', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 13:45:35.667917 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f26674f5144a66495a5fa8b60e5e2099f754cf173f6b965ad6a6633c4acf7342', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': 
'/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-02 13:45:35.667929 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6da13f652da1b168c58efc8d806eed8aaec86c56f3dd0c1fa2315726da243876', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 13:45:35.667940 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1944da9ad8f8f87511032c910d6e70282e6e1e4ebc5c00d68fd06af5077b80a2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 13:45:35.667952 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ff0afe795cf1081ff9df2a8c36c0a11e7e962da8d00498f901709dfc5b5c9b1f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:35.667963 | orchestrator | ok: [testbed-node-3] => (item={'id': '6dc9a3c998554d16f7d07e8135799fc9b9ae8d20eb2548d5dab912ed0fbccd9a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:35.667974 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c1aecff4461913d64ab15c20b4f8576ebb8328b85e94427ae0b118c859353b68', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 13:45:35.667985 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5247f1cbe059a7cecad25a558f6038a2be8e2b32c604fa6840a182cf15ee185b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:35.667996 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'd19e5685684f9b68897448cc38d0a9ecbf01d30bdb6e7cfc03b4d5f0a2884ade', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:35.668007 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3784ad17c5577d31e8a45bc3c1c1c847d3b4d6d371451a447c688f0395f9d8df', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:35.668018 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f50723a14a5231b4d7d36f727f9bec59e9f1c4317d079e398da97de63b1799aa', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:35.668035 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f49cc3155397df7a092a029a740a3d4fc35db7fd2cc21bad9545455ebf7f3557', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 13:45:35.668052 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7dd5ef8cd6f7e7707f9279547d953e5b0552384cd682841d16b4b8a01070e263', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 13:45:35.935355 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f4fbc71d93ecd648937c2fe5c82dc1b16f78d644e76e4d422ebc16dc4d096848', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 13:45:35.935444 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9b3c8b45490c48aa9224f8fe8b5aafa555baec723e76a53f9ec13096e2d93bb1', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.935460 | orchestrator | skipping: [testbed-node-4] => (item={'id': '280cd0da7dac8bf4dafbc005017fd7964d4a2d0bebf25321474b59be6f2dcb61', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.935473 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5b09b9205b66e08f05df45d98dd897569519e1873d6f68a62caaf80be5aa0fb9', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.935484 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67d713814af86683a6ae937d9b7c5b85e74ddb834229aa3db57eb7919b17c1cb', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.935495 | orchestrator | skipping: [testbed-node-4] => (item={'id': '822bfcb3bcf0583ba39f605920b2cc4084e61075fb2943d154cb9a912b6cb679', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-02 13:45:35.935507 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fbda72765a022c45dfa5cb7045687289af0207cbfe5aa48e02554559f9a0fe67', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 13:45:35.935518 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7332fee1a6dd4130e059968948882c54e262de458c1c953329c445f496a47d57', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 
minutes'})  2025-06-02 13:45:35.935549 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a907f02333713f8cea5ee31e7ed568443bbb385f1f0c38dede5f486f0045bce9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-02 13:45:35.935562 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ced968ef560b7a3913c6096365bb57e8e9e82a71fa8c4b2846ac3b9d37c1d6b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 13:45:35.935573 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6e97d77c6817fba09b2639f80b3c97498349a572bd39d9d6e43af275432b475f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 13:45:35.935585 | orchestrator | ok: [testbed-node-4] => (item={'id': '23b0b82926eeaaf051d286069774e6916fe1e95f2005e831db5b8f06e326f0a8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:35.935615 | orchestrator | ok: [testbed-node-4] => (item={'id': '145c9190c5507b6c612a742c92d0c67f377ada700f0580dcf686a6492abf644d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:35.935627 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ed8e4fd2c3fcdf811bbe927ce8320557c336e6cae04bec2e5eac4370f1c789b5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 13:45:35.935655 | orchestrator | skipping: [testbed-node-4] => (item={'id': '469a9c0ac39d61944df90d0be00845a8f9be73f84b33998d6f44804250257842', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:35.935667 | orchestrator | skipping: [testbed-node-4] => (item={'id': '843b4877d7be03dc091e374a07ec63e80657cd7df56ceafc2f6b23ed11fd5598', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:35.935678 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd4b1b0a0c8c49839d2df602aa9627560ed85f6e55c68b1ec890d3c5d6545da4', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:35.935690 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd10e362d5a8b93cc13eceac1de3d8e516fcc7dd3ae423d58201863f17d826bc', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:35.935701 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'deef16cef0e77734d3e7d7673053bb68eebc6eab25d497c69571c9c03950091d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 13:45:35.935712 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aee6ac829459a20fb1cbee7acca334b84e8893e0c0013de486928ae8e5c53836', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 13:45:35.935723 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3dc076bcff20724f38889416cae1daec3490f46afb7301befbba555315411aca', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  
2025-06-02 13:45:35.935733 | orchestrator | skipping: [testbed-node-5] => (item={'id': '797d4dfa0d1cc2a2169c000225da6758e4370329f09b00bc1b355e824915d2bc', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.935744 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1d0560a59a67bfaf5322128d78cf26691e28194ede80ee2407548d9913076ccf', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 13:45:35.935760 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e75a3113210ca63700f552a52ef17bf3150eb303ed51ab027fda9f11be879dba', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.935771 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c1be8bc2fc77102bcb51f17334c96d94d459bb30039eac93ed9506a6a1f51dc7', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-02 13:45:35.935789 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3bc84d157542bdf65868a5626677eab056bb19243dce7b3eba948f4762785ed3', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-02 13:45:35.935800 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a1f0e2256fd9b6c1f91b0eda73f9d15c43d4940d3fcb1e7dfdfa317a552c6a9f', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 13:45:35.935811 | orchestrator | skipping: 
[testbed-node-5] => (item={'id': '248106777b4e8d9004ddc60687a953e4a10cf0f072c0fa96e6f7dd2e4d09858a', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 13:45:35.935822 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf9d70557cbac1f01350fb156b0166ce489268ee5889e7a37379bfd0a9a5497a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-02 13:45:35.935840 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f26c694315b15af43e580934fb03b9b17d31a0cd599e137ff3cd176da8b1d94', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 13:45:43.706213 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1df2f9c67d556f69cd6e5502809dfef2c6aa3ea1e75e0409d2e47d7a0cff65af', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 13:45:43.706371 | orchestrator | ok: [testbed-node-5] => (item={'id': '6b93726d52ded60a8c35e367e168863a156a0d65bf2a433b08522b17236d7242', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:43.706390 | orchestrator | ok: [testbed-node-5] => (item={'id': '9d520a8a6b68143141e2631283923a05d559846acb2f26573defe7894cb2d039', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-02 13:45:43.706403 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e340eb09b970e83c1532b5e362a66529f19b008061f55a3c4240c9d198222f8e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up 26 minutes'})  2025-06-02 13:45:43.706416 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd3976ef0db74ad38b6ae220a1d0ce45518956115929f1b787b9590a7bf9437a', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:43.706429 | orchestrator | skipping: [testbed-node-5] => (item={'id': '025a23702c0045e225fa4329a9bf4085223efedc5263daa3b406247a39ee0b73', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-02 13:45:43.706440 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5524a5afe685a48fbcca891df1543117538d2e772fe7737ee55be3958a2d824e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:43.706452 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c7518edf47acaa4e3b968cbae883faa7024318fdd8e71538f6e184c692bf66f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 13:45:43.706489 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1aeb09f3894dcfcabec6f734b108b2264abac403764f880a2c07ea07a4b22fa7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 29 minutes'})  2025-06-02 13:45:43.706534 | orchestrator | 2025-06-02 13:45:43.706549 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-02 13:45:43.706561 | orchestrator | Monday 02 June 2025 13:45:35 +0000 (0:00:00.468) 0:00:04.353 *********** 2025-06-02 13:45:43.706572 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.706584 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 13:45:43.706595 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:43.706605 | orchestrator | 2025-06-02 13:45:43.706616 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-02 13:45:43.706627 | orchestrator | Monday 02 June 2025 13:45:36 +0000 (0:00:00.240) 0:00:04.593 *********** 2025-06-02 13:45:43.706638 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.706650 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:43.706661 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:43.706671 | orchestrator | 2025-06-02 13:45:43.706684 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-02 13:45:43.706698 | orchestrator | Monday 02 June 2025 13:45:36 +0000 (0:00:00.371) 0:00:04.965 *********** 2025-06-02 13:45:43.706710 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.706723 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:43.706736 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:43.706748 | orchestrator | 2025-06-02 13:45:43.706760 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 13:45:43.706773 | orchestrator | Monday 02 June 2025 13:45:36 +0000 (0:00:00.284) 0:00:05.249 *********** 2025-06-02 13:45:43.706786 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.706798 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:43.706810 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:43.706822 | orchestrator | 2025-06-02 13:45:43.706836 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-02 13:45:43.706849 | orchestrator | Monday 02 June 2025 13:45:37 +0000 (0:00:00.247) 0:00:05.497 *********** 2025-06-02 13:45:43.706861 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-02 13:45:43.706875 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-02 13:45:43.706888 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.706900 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-02 13:45:43.706913 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-02 13:45:43.706944 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:43.706957 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-02 13:45:43.706969 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-02 13:45:43.706982 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:43.706995 | orchestrator | 2025-06-02 13:45:43.707008 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-02 13:45:43.707021 | orchestrator | Monday 02 June 2025 13:45:37 +0000 (0:00:00.261) 0:00:05.759 *********** 2025-06-02 13:45:43.707034 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707046 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:43.707065 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:43.707083 | orchestrator | 2025-06-02 13:45:43.707101 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-02 13:45:43.707121 | orchestrator | Monday 02 June 2025 13:45:37 +0000 (0:00:00.447) 0:00:06.207 *********** 2025-06-02 13:45:43.707137 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707148 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:43.707159 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:43.707178 | orchestrator | 2025-06-02 13:45:43.707189 | orchestrator | TASK [Set test result to failed if an OSD is 
not running] ********************** 2025-06-02 13:45:43.707200 | orchestrator | Monday 02 June 2025 13:45:38 +0000 (0:00:00.324) 0:00:06.531 *********** 2025-06-02 13:45:43.707210 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707221 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:43.707231 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:43.707267 | orchestrator | 2025-06-02 13:45:43.707279 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-02 13:45:43.707290 | orchestrator | Monday 02 June 2025 13:45:38 +0000 (0:00:00.300) 0:00:06.831 *********** 2025-06-02 13:45:43.707302 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707313 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:43.707323 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:43.707334 | orchestrator | 2025-06-02 13:45:43.707345 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 13:45:43.707356 | orchestrator | Monday 02 June 2025 13:45:38 +0000 (0:00:00.268) 0:00:07.100 *********** 2025-06-02 13:45:43.707366 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707377 | orchestrator | 2025-06-02 13:45:43.707388 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 13:45:43.707398 | orchestrator | Monday 02 June 2025 13:45:39 +0000 (0:00:00.680) 0:00:07.781 *********** 2025-06-02 13:45:43.707409 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707419 | orchestrator | 2025-06-02 13:45:43.707430 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 13:45:43.707441 | orchestrator | Monday 02 June 2025 13:45:39 +0000 (0:00:00.256) 0:00:08.037 *********** 2025-06-02 13:45:43.707451 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707462 | orchestrator | 2025-06-02 13:45:43.707473 | 
orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:43.707483 | orchestrator | Monday 02 June 2025 13:45:39 +0000 (0:00:00.229) 0:00:08.267 *********** 2025-06-02 13:45:43.707494 | orchestrator | 2025-06-02 13:45:43.707505 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:43.707516 | orchestrator | Monday 02 June 2025 13:45:39 +0000 (0:00:00.067) 0:00:08.334 *********** 2025-06-02 13:45:43.707527 | orchestrator | 2025-06-02 13:45:43.707538 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:43.707548 | orchestrator | Monday 02 June 2025 13:45:39 +0000 (0:00:00.067) 0:00:08.402 *********** 2025-06-02 13:45:43.707559 | orchestrator | 2025-06-02 13:45:43.707570 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 13:45:43.707580 | orchestrator | Monday 02 June 2025 13:45:40 +0000 (0:00:00.069) 0:00:08.472 *********** 2025-06-02 13:45:43.707591 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707602 | orchestrator | 2025-06-02 13:45:43.707612 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-02 13:45:43.707623 | orchestrator | Monday 02 June 2025 13:45:40 +0000 (0:00:00.244) 0:00:08.716 *********** 2025-06-02 13:45:43.707634 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:43.707644 | orchestrator | 2025-06-02 13:45:43.707655 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 13:45:43.707666 | orchestrator | Monday 02 June 2025 13:45:40 +0000 (0:00:00.210) 0:00:08.927 *********** 2025-06-02 13:45:43.707676 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707687 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:43.707698 | orchestrator | ok: [testbed-node-5] 2025-06-02 
13:45:43.707708 | orchestrator | 2025-06-02 13:45:43.707719 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-02 13:45:43.707730 | orchestrator | Monday 02 June 2025 13:45:40 +0000 (0:00:00.294) 0:00:09.221 *********** 2025-06-02 13:45:43.707741 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707751 | orchestrator | 2025-06-02 13:45:43.707762 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-02 13:45:43.707779 | orchestrator | Monday 02 June 2025 13:45:41 +0000 (0:00:00.683) 0:00:09.905 *********** 2025-06-02 13:45:43.707790 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 13:45:43.707801 | orchestrator | 2025-06-02 13:45:43.707811 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-02 13:45:43.707822 | orchestrator | Monday 02 June 2025 13:45:43 +0000 (0:00:01.662) 0:00:11.567 *********** 2025-06-02 13:45:43.707833 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707843 | orchestrator | 2025-06-02 13:45:43.707854 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-02 13:45:43.707865 | orchestrator | Monday 02 June 2025 13:45:43 +0000 (0:00:00.132) 0:00:11.699 *********** 2025-06-02 13:45:43.707875 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:43.707886 | orchestrator | 2025-06-02 13:45:43.707897 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-02 13:45:43.707907 | orchestrator | Monday 02 June 2025 13:45:43 +0000 (0:00:00.303) 0:00:12.003 *********** 2025-06-02 13:45:43.707925 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.329978 | orchestrator | 2025-06-02 13:45:56.330159 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-02 13:45:56.330177 | orchestrator 
| Monday 02 June 2025 13:45:43 +0000 (0:00:00.130) 0:00:12.134 *********** 2025-06-02 13:45:56.330190 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330202 | orchestrator | 2025-06-02 13:45:56.330213 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 13:45:56.330224 | orchestrator | Monday 02 June 2025 13:45:43 +0000 (0:00:00.134) 0:00:12.268 *********** 2025-06-02 13:45:56.330235 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330246 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.330302 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.330313 | orchestrator | 2025-06-02 13:45:56.330370 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-02 13:45:56.330383 | orchestrator | Monday 02 June 2025 13:45:44 +0000 (0:00:00.300) 0:00:12.569 *********** 2025-06-02 13:45:56.330394 | orchestrator | changed: [testbed-node-3] 2025-06-02 13:45:56.330406 | orchestrator | changed: [testbed-node-4] 2025-06-02 13:45:56.330417 | orchestrator | changed: [testbed-node-5] 2025-06-02 13:45:56.330427 | orchestrator | 2025-06-02 13:45:56.330438 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-02 13:45:56.330449 | orchestrator | Monday 02 June 2025 13:45:46 +0000 (0:00:02.564) 0:00:15.133 *********** 2025-06-02 13:45:56.330460 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330471 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.330482 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.330493 | orchestrator | 2025-06-02 13:45:56.330504 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-02 13:45:56.330517 | orchestrator | Monday 02 June 2025 13:45:47 +0000 (0:00:00.312) 0:00:15.445 *********** 2025-06-02 13:45:56.330530 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330542 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 13:45:56.330555 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.330567 | orchestrator | 2025-06-02 13:45:56.330579 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-02 13:45:56.330592 | orchestrator | Monday 02 June 2025 13:45:47 +0000 (0:00:00.491) 0:00:15.937 *********** 2025-06-02 13:45:56.330604 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.330617 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:56.330629 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:56.330642 | orchestrator | 2025-06-02 13:45:56.330654 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-02 13:45:56.330666 | orchestrator | Monday 02 June 2025 13:45:47 +0000 (0:00:00.355) 0:00:16.293 *********** 2025-06-02 13:45:56.330678 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330690 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.330702 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.330734 | orchestrator | 2025-06-02 13:45:56.330747 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-02 13:45:56.330759 | orchestrator | Monday 02 June 2025 13:45:48 +0000 (0:00:00.516) 0:00:16.809 *********** 2025-06-02 13:45:56.330771 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.330783 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:56.330796 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:56.330808 | orchestrator | 2025-06-02 13:45:56.330825 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-02 13:45:56.330843 | orchestrator | Monday 02 June 2025 13:45:48 +0000 (0:00:00.285) 0:00:17.094 *********** 2025-06-02 13:45:56.330868 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.330887 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
13:45:56.330905 | orchestrator | skipping: [testbed-node-5] 2025-06-02 13:45:56.330919 | orchestrator | 2025-06-02 13:45:56.330930 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 13:45:56.330941 | orchestrator | Monday 02 June 2025 13:45:48 +0000 (0:00:00.281) 0:00:17.375 *********** 2025-06-02 13:45:56.330952 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.330962 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.330973 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.330984 | orchestrator | 2025-06-02 13:45:56.330994 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-02 13:45:56.331005 | orchestrator | Monday 02 June 2025 13:45:49 +0000 (0:00:00.478) 0:00:17.854 *********** 2025-06-02 13:45:56.331015 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.331026 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.331037 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.331047 | orchestrator | 2025-06-02 13:45:56.331058 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-02 13:45:56.331069 | orchestrator | Monday 02 June 2025 13:45:50 +0000 (0:00:00.756) 0:00:18.611 *********** 2025-06-02 13:45:56.331079 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.331090 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.331100 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.331111 | orchestrator | 2025-06-02 13:45:56.331122 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-02 13:45:56.331132 | orchestrator | Monday 02 June 2025 13:45:50 +0000 (0:00:00.320) 0:00:18.931 *********** 2025-06-02 13:45:56.331143 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.331154 | orchestrator | skipping: [testbed-node-4] 2025-06-02 13:45:56.331164 | orchestrator | skipping: 
[testbed-node-5] 2025-06-02 13:45:56.331175 | orchestrator | 2025-06-02 13:45:56.331185 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-02 13:45:56.331196 | orchestrator | Monday 02 June 2025 13:45:50 +0000 (0:00:00.296) 0:00:19.228 *********** 2025-06-02 13:45:56.331207 | orchestrator | ok: [testbed-node-3] 2025-06-02 13:45:56.331217 | orchestrator | ok: [testbed-node-4] 2025-06-02 13:45:56.331228 | orchestrator | ok: [testbed-node-5] 2025-06-02 13:45:56.331239 | orchestrator | 2025-06-02 13:45:56.331291 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 13:45:56.331312 | orchestrator | Monday 02 June 2025 13:45:51 +0000 (0:00:00.304) 0:00:19.532 *********** 2025-06-02 13:45:56.331332 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:56.331351 | orchestrator | 2025-06-02 13:45:56.331363 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 13:45:56.331373 | orchestrator | Monday 02 June 2025 13:45:51 +0000 (0:00:00.750) 0:00:20.283 *********** 2025-06-02 13:45:56.331384 | orchestrator | skipping: [testbed-node-3] 2025-06-02 13:45:56.331395 | orchestrator | 2025-06-02 13:45:56.331427 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 13:45:56.331439 | orchestrator | Monday 02 June 2025 13:45:52 +0000 (0:00:00.262) 0:00:20.546 *********** 2025-06-02 13:45:56.331449 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:56.331471 | orchestrator | 2025-06-02 13:45:56.331482 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 13:45:56.331493 | orchestrator | Monday 02 June 2025 13:45:53 +0000 (0:00:01.564) 0:00:22.110 *********** 2025-06-02 13:45:56.331503 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] 2025-06-02 13:45:56.331514 | orchestrator | 2025-06-02 13:45:56.331525 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 13:45:56.331535 | orchestrator | Monday 02 June 2025 13:45:53 +0000 (0:00:00.259) 0:00:22.370 *********** 2025-06-02 13:45:56.331546 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:56.331557 | orchestrator | 2025-06-02 13:45:56.331567 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:56.331578 | orchestrator | Monday 02 June 2025 13:45:54 +0000 (0:00:00.238) 0:00:22.608 *********** 2025-06-02 13:45:56.331589 | orchestrator | 2025-06-02 13:45:56.331599 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:56.331610 | orchestrator | Monday 02 June 2025 13:45:54 +0000 (0:00:00.067) 0:00:22.675 *********** 2025-06-02 13:45:56.331620 | orchestrator | 2025-06-02 13:45:56.331631 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 13:45:56.331642 | orchestrator | Monday 02 June 2025 13:45:54 +0000 (0:00:00.065) 0:00:22.741 *********** 2025-06-02 13:45:56.331653 | orchestrator | 2025-06-02 13:45:56.331663 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 13:45:56.331674 | orchestrator | Monday 02 June 2025 13:45:54 +0000 (0:00:00.068) 0:00:22.809 *********** 2025-06-02 13:45:56.331685 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 13:45:56.331696 | orchestrator | 2025-06-02 13:45:56.331706 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 13:45:56.331717 | orchestrator | Monday 02 June 2025 13:45:55 +0000 (0:00:01.312) 0:00:24.122 *********** 2025-06-02 13:45:56.331727 | orchestrator | 
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-02 13:45:56.331738 | orchestrator |  "msg": [ 2025-06-02 13:45:56.331750 | orchestrator |  "Validator run completed.", 2025-06-02 13:45:56.331761 | orchestrator |  "You can find the report file here:", 2025-06-02 13:45:56.331772 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T13:45:32+00:00-report.json", 2025-06-02 13:45:56.331783 | orchestrator |  "on the following host:", 2025-06-02 13:45:56.331794 | orchestrator |  "testbed-manager" 2025-06-02 13:45:56.331805 | orchestrator |  ] 2025-06-02 13:45:56.331817 | orchestrator | } 2025-06-02 13:45:56.331827 | orchestrator | 2025-06-02 13:45:56.331838 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 13:45:56.331850 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-02 13:45:56.331869 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 13:45:56.331880 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 13:45:56.331891 | orchestrator | 2025-06-02 13:45:56.331902 | orchestrator | 2025-06-02 13:45:56.331912 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 13:45:56.331923 | orchestrator | Monday 02 June 2025 13:45:56 +0000 (0:00:00.603) 0:00:24.726 *********** 2025-06-02 13:45:56.331934 | orchestrator | =============================================================================== 2025-06-02 13:45:56.331944 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.56s 2025-06-02 13:45:56.331955 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.66s 2025-06-02 13:45:56.331966 | orchestrator | Aggregate test results step one 
----------------------------------------- 1.56s 2025-06-02 13:45:56.331983 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2025-06-02 13:45:56.331994 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2025-06-02 13:45:56.332004 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.76s 2025-06-02 13:45:56.332015 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.75s 2025-06-02 13:45:56.332025 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.68s 2025-06-02 13:45:56.332036 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s 2025-06-02 13:45:56.332047 | orchestrator | Print report file information ------------------------------------------- 0.60s 2025-06-02 13:45:56.332057 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-06-02 13:45:56.332068 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s 2025-06-02 13:45:56.332079 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-06-02 13:45:56.332089 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-06-02 13:45:56.332100 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.47s 2025-06-02 13:45:56.332111 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.47s 2025-06-02 13:45:56.332129 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.45s 2025-06-02 13:45:56.639145 | orchestrator | Prepare test data ------------------------------------------------------- 0.42s 2025-06-02 13:45:56.639284 | orchestrator | Set test result to failed when count of 
containers is wrong ------------- 0.37s 2025-06-02 13:45:56.639314 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.36s 2025-06-02 13:45:56.947791 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-02 13:45:56.958515 | orchestrator | + set -e 2025-06-02 13:45:56.958579 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 13:45:56.958594 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 13:45:56.958605 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 13:45:56.958617 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 13:45:56.958769 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 13:45:56.958790 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 13:45:56.958811 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 13:45:56.958829 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-02 13:45:56.958840 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-02 13:45:56.958852 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 13:45:56.958862 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 13:45:56.958873 | orchestrator | ++ export ARA=false 2025-06-02 13:45:56.958884 | orchestrator | ++ ARA=false 2025-06-02 13:45:56.958895 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 13:45:56.958905 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 13:45:56.958916 | orchestrator | ++ export TEMPEST=false 2025-06-02 13:45:56.958926 | orchestrator | ++ TEMPEST=false 2025-06-02 13:45:56.958937 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 13:45:56.958947 | orchestrator | ++ IS_ZUUL=true 2025-06-02 13:45:56.958958 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2025-06-02 13:45:56.958969 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.217 2025-06-02 13:45:56.958979 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 13:45:56.958990 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 13:45:56.959000 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 13:45:56.959011 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 13:45:56.959021 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 13:45:56.959032 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 13:45:56.959043 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 13:45:56.959053 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 13:45:56.959063 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-02 13:45:56.959074 | orchestrator | + source /etc/os-release 2025-06-02 13:45:56.959085 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-02 13:45:56.959095 | orchestrator | ++ NAME=Ubuntu 2025-06-02 13:45:56.959106 | orchestrator | ++ VERSION_ID=24.04 2025-06-02 13:45:56.959116 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-02 13:45:56.959134 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-02 13:45:56.959154 | orchestrator | ++ ID=ubuntu 2025-06-02 13:45:56.959218 | orchestrator | ++ ID_LIKE=debian 2025-06-02 13:45:56.959239 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-02 13:45:56.959288 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-02 13:45:56.959308 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-02 13:45:56.959327 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-02 13:45:56.959347 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-02 13:45:56.959366 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-02 13:45:56.959385 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-02 13:45:56.959405 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-02 13:45:56.959424 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 13:45:56.984289 | orchestrator | + sudo apt-get install -y 
libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 13:46:17.822249 | orchestrator | 2025-06-02 13:46:17.822448 | orchestrator | # Status of Elasticsearch 2025-06-02 13:46:17.822474 | orchestrator | 2025-06-02 13:46:17.822492 | orchestrator | + pushd /opt/configuration/contrib 2025-06-02 13:46:17.822510 | orchestrator | + echo 2025-06-02 13:46:17.822526 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-02 13:46:17.822543 | orchestrator | + echo 2025-06-02 13:46:17.822559 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-02 13:46:17.999386 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-02 13:46:17.999485 | orchestrator | 2025-06-02 13:46:17.999501 | orchestrator | # Status of MariaDB 2025-06-02 13:46:17.999514 | orchestrator | 2025-06-02 13:46:17.999526 | orchestrator | + echo 2025-06-02 13:46:17.999537 | orchestrator | + echo '# Status of MariaDB' 2025-06-02 13:46:17.999548 | orchestrator | + echo 2025-06-02 13:46:17.999559 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-02 13:46:17.999571 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-02 13:46:18.058237 | orchestrator | Reading package lists... 2025-06-02 13:46:18.411966 | orchestrator | Building dependency tree... 2025-06-02 13:46:18.412412 | orchestrator | Reading state information... 2025-06-02 13:46:18.796671 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-02 13:46:18.796782 | orchestrator | bc set to manually installed. 
2025-06-02 13:46:18.796811 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-02 13:46:19.453500 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-02 13:46:19.454461 | orchestrator | 2025-06-02 13:46:19.454508 | orchestrator | # Status of Prometheus 2025-06-02 13:46:19.454529 | orchestrator | 2025-06-02 13:46:19.454547 | orchestrator | + echo 2025-06-02 13:46:19.454564 | orchestrator | + echo '# Status of Prometheus' 2025-06-02 13:46:19.454581 | orchestrator | + echo 2025-06-02 13:46:19.454600 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-02 13:46:19.518476 | orchestrator | Unauthorized 2025-06-02 13:46:19.521993 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-02 13:46:19.582573 | orchestrator | Unauthorized 2025-06-02 13:46:19.586295 | orchestrator | 2025-06-02 13:46:19.586367 | orchestrator | # Status of RabbitMQ 2025-06-02 13:46:19.586391 | orchestrator | 2025-06-02 13:46:19.586409 | orchestrator | + echo 2025-06-02 13:46:19.586428 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-02 13:46:19.586446 | orchestrator | + echo 2025-06-02 13:46:19.586466 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-02 13:46:20.080080 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-02 13:46:20.089971 | orchestrator | 2025-06-02 13:46:20.090141 | orchestrator | # Status of Redis 2025-06-02 13:46:20.090159 | orchestrator | 2025-06-02 13:46:20.090171 | orchestrator | + echo 2025-06-02 13:46:20.090182 | orchestrator | + echo '# Status of Redis' 2025-06-02 13:46:20.090195 | orchestrator | + echo 2025-06-02 13:46:20.090207 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e 
PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-02 13:46:20.097525 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001784s;;;0.000000;10.000000 2025-06-02 13:46:20.097998 | orchestrator | 2025-06-02 13:46:20.098060 | orchestrator | # Create backup of MariaDB database 2025-06-02 13:46:20.098074 | orchestrator | 2025-06-02 13:46:20.098086 | orchestrator | + popd 2025-06-02 13:46:20.098096 | orchestrator | + echo 2025-06-02 13:46:20.098107 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-02 13:46:20.098119 | orchestrator | + echo 2025-06-02 13:46:20.098130 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-02 13:46:21.887596 | orchestrator | 2025-06-02 13:46:21 | INFO  | Task d960dc0a-0388-4ecf-be47-5b574034a7f5 (mariadb_backup) was prepared for execution. 2025-06-02 13:46:21.887694 | orchestrator | 2025-06-02 13:46:21 | INFO  | It takes a moment until task d960dc0a-0388-4ecf-be47-5b574034a7f5 (mariadb_backup) has been started and output is visible here. 
2025-06-02 13:46:25.931311 | orchestrator | 2025-06-02 13:46:25.931542 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 13:46:25.932539 | orchestrator | 2025-06-02 13:46:25.933333 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 13:46:25.934412 | orchestrator | Monday 02 June 2025 13:46:25 +0000 (0:00:00.178) 0:00:00.178 *********** 2025-06-02 13:46:26.124203 | orchestrator | ok: [testbed-node-0] 2025-06-02 13:46:26.259155 | orchestrator | ok: [testbed-node-1] 2025-06-02 13:46:26.260133 | orchestrator | ok: [testbed-node-2] 2025-06-02 13:46:26.261066 | orchestrator | 2025-06-02 13:46:26.261906 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 13:46:26.262601 | orchestrator | Monday 02 June 2025 13:46:26 +0000 (0:00:00.332) 0:00:00.511 *********** 2025-06-02 13:46:26.836791 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 13:46:26.836909 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 13:46:26.837936 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 13:46:26.838297 | orchestrator | 2025-06-02 13:46:26.839047 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 13:46:26.841120 | orchestrator | 2025-06-02 13:46:26.841966 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 13:46:26.842414 | orchestrator | Monday 02 June 2025 13:46:26 +0000 (0:00:00.577) 0:00:01.089 *********** 2025-06-02 13:46:27.252421 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 13:46:27.253427 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 13:46:27.254281 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 13:46:27.255632 | orchestrator | 
2025-06-02 13:46:27.256764 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 13:46:27.257079 | orchestrator | Monday 02 June 2025 13:46:27 +0000 (0:00:00.412) 0:00:01.501 ***********
2025-06-02 13:46:27.793782 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 13:46:27.794456 | orchestrator |
2025-06-02 13:46:27.795708 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-06-02 13:46:27.797125 | orchestrator | Monday 02 June 2025 13:46:27 +0000 (0:00:00.544) 0:00:02.045 ***********
2025-06-02 13:46:31.003948 | orchestrator | ok: [testbed-node-0]
2025-06-02 13:46:31.004642 | orchestrator | ok: [testbed-node-1]
2025-06-02 13:46:31.006112 | orchestrator | ok: [testbed-node-2]
2025-06-02 13:46:31.008384 | orchestrator |
2025-06-02 13:46:31.009666 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-06-02 13:46:31.010706 | orchestrator | Monday 02 June 2025 13:46:30 +0000 (0:00:03.206) 0:00:05.251 ***********
2025-06-02 13:48:50.664392 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-02 13:48:50.664514 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-06-02 13:48:50.664530 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-02 13:48:50.664573 | orchestrator | mariadb_bootstrap_restart
2025-06-02 13:48:50.737235 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:48:50.737688 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:48:50.739170 | orchestrator | changed: [testbed-node-0]
2025-06-02 13:48:50.740108 | orchestrator |
2025-06-02 13:48:50.741210 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-02 13:48:50.742191 | orchestrator | skipping: no hosts matched
2025-06-02 13:48:50.743317 | orchestrator |
2025-06-02 13:48:50.743753 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 13:48:50.744854 | orchestrator | skipping: no hosts matched
2025-06-02 13:48:50.745067 | orchestrator |
2025-06-02 13:48:50.746191 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-02 13:48:50.746893 | orchestrator | skipping: no hosts matched
2025-06-02 13:48:50.747117 | orchestrator |
2025-06-02 13:48:50.747286 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-02 13:48:50.748448 | orchestrator |
2025-06-02 13:48:50.748825 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-02 13:48:50.749746 | orchestrator | Monday 02 June 2025 13:48:50 +0000 (0:02:19.738) 0:02:24.990 ***********
2025-06-02 13:48:50.920309 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:48:51.032782 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:48:51.033111 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:48:51.033750 | orchestrator |
2025-06-02 13:48:51.034985 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-02 13:48:51.035538 | orchestrator | Monday 02 June 2025 13:48:51 +0000 (0:00:00.293) 0:02:25.283 ***********
2025-06-02 13:48:51.407559 | orchestrator | skipping: [testbed-node-0]
2025-06-02 13:48:51.453556 | orchestrator | skipping: [testbed-node-1]
2025-06-02 13:48:51.454396 | orchestrator | skipping: [testbed-node-2]
2025-06-02 13:48:51.454959 | orchestrator |
2025-06-02 13:48:51.455863 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:48:51.456201 | orchestrator | 2025-06-02 13:48:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:48:51.456509 | orchestrator | 2025-06-02 13:48:51 | INFO  | Please wait and do not abort execution.
2025-06-02 13:48:51.457516 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 13:48:51.458196 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 13:48:51.458807 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 13:48:51.459163 | orchestrator |
2025-06-02 13:48:51.460339 | orchestrator |
2025-06-02 13:48:51.461334 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:48:51.461794 | orchestrator | Monday 02 June 2025 13:48:51 +0000 (0:00:00.422) 0:02:25.705 ***********
2025-06-02 13:48:51.462289 | orchestrator | ===============================================================================
2025-06-02 13:48:51.462762 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 139.74s
2025-06-02 13:48:51.463390 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.21s
2025-06-02 13:48:51.463618 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s
2025-06-02 13:48:51.464105 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s
2025-06-02 13:48:51.464462 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s
2025-06-02 13:48:51.464868 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s
2025-06-02 13:48:51.465213 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-06-02 13:48:51.465554 | orchestrator | Include mariadb post-deploy.yml
----------------------------------------- 0.29s
2025-06-02 13:48:52.029021 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-06-02 13:48:52.036732 | orchestrator | + set -e
2025-06-02 13:48:52.036792 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 13:48:52.036806 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 13:48:52.036819 | orchestrator | ++ INTERACTIVE=false
2025-06-02 13:48:52.036837 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 13:48:52.036849 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 13:48:52.036860 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 13:48:52.037695 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 13:48:52.040777 | orchestrator |
2025-06-02 13:48:52.040817 | orchestrator | # OpenStack endpoints
2025-06-02 13:48:52.040829 | orchestrator |
2025-06-02 13:48:52.040840 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-02 13:48:52.040852 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-02 13:48:52.040863 | orchestrator | + export OS_CLOUD=admin
2025-06-02 13:48:52.040874 | orchestrator | + OS_CLOUD=admin
2025-06-02 13:48:52.040885 | orchestrator | + echo
2025-06-02 13:48:52.040896 | orchestrator | + echo '# OpenStack endpoints'
2025-06-02 13:48:52.040906 | orchestrator | + echo
2025-06-02 13:48:52.040917 | orchestrator | + openstack endpoint list
2025-06-02 13:48:55.302915 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 13:48:55.303019 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-06-02 13:48:55.303033 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 13:48:55.303063 | orchestrator | | 11e00fafddb3454cb5b23af249867c0f | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 13:48:55.303075 | orchestrator | | 2b90422561444da39bb05a60aefa79f7 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-06-02 13:48:55.303102 | orchestrator | | 2ba1c7ef617449e69cb685933a09b0cb | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-06-02 13:48:55.303120 | orchestrator | | 2d5b13ae778a4a21b3b0586e46f98e02 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-06-02 13:48:55.303138 | orchestrator | | 2f3e49bb0d0d41d193fbeba89fedff86 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-06-02 13:48:55.303155 | orchestrator | | 33770737a35e49aeb64bc71b818cd8a8 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-06-02 13:48:55.303174 | orchestrator | | 33e678c383714197afe0bed78c8433f2 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-06-02 13:48:55.303195 | orchestrator | | 38ddcdb0ee1a4786b8a383b007e95906 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-06-02 13:48:55.303215 | orchestrator | | 3f0ba312891f451e8bbb9b7b84d62cd5 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-02 13:48:55.303235 | orchestrator | | 464bf52d96554b5baf904b48b54e560e | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-06-02 13:48:55.303255 | orchestrator | | 5ec2f17c828f45cd9c2c8cf51ad2537c | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-06-02 13:48:55.303302 | orchestrator | | 7880ed6f6d3641dcbcfd00749bf70611 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-06-02 13:48:55.303356 | orchestrator | | 7c8ecc0bde4d414e8ff5e245ca64f091 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-06-02 13:48:55.303377 | orchestrator | | 836dcc152fe04f6b8e5ff2eae885448f | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-06-02 13:48:55.303389 | orchestrator | | 8419a4028ed84418b929a0208d619c88 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-06-02 13:48:55.303399 | orchestrator | | 864f1898855b41e89621c8ad5d0e0e22 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-06-02 13:48:55.303410 | orchestrator | | 89441964b6de4adb8488c7f24da14958 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 13:48:55.303421 | orchestrator | | a4f248b86e8346f883becfdf7501cbdb | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-06-02 13:48:55.303432 | orchestrator | | bd0636d5b64d4debbb9194a84b40cbb8 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-06-02 13:48:55.303442 | orchestrator | | bd0684a0e1cc4253848d1131275ff366 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-02 13:48:55.303470 | orchestrator | | d428ecf4d3034faabac13a59eaa9b967 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-06-02 13:48:55.303482 | orchestrator | | e9fd52af11064ddcbe2c9dc7afb88cd3 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-06-02 13:48:55.303492 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-02 13:48:55.556547 | orchestrator |
2025-06-02 13:48:55.556641 | orchestrator | # Cinder
2025-06-02 13:48:55.556655 | orchestrator |
2025-06-02 13:48:55.556667 | orchestrator | + echo
2025-06-02 13:48:55.556680 | orchestrator | + echo '# Cinder'
2025-06-02 13:48:55.556692 | orchestrator | + echo
2025-06-02 13:48:55.556704 | orchestrator | + openstack volume service list
2025-06-02 13:48:58.284022 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 13:48:58.284153 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 13:48:58.284169 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 13:48:58.284181 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T13:48:51.000000 |
2025-06-02 13:48:58.284191 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T13:48:53.000000 |
2025-06-02 13:48:58.284202 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T13:48:53.000000 |
2025-06-02 13:48:58.284213 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-02T13:48:50.000000 |
2025-06-02 13:48:58.284223 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-02T13:48:51.000000 |
2025-06-02 13:48:58.284234 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-02T13:48:51.000000 |
2025-06-02 13:48:58.284246 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up |
2025-06-02T13:48:52.000000 |
2025-06-02 13:48:58.284278 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-02T13:48:52.000000 |
2025-06-02 13:48:58.284289 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-02T13:48:53.000000 |
2025-06-02 13:48:58.284300 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-02 13:48:58.548113 | orchestrator |
2025-06-02 13:48:58.548193 | orchestrator | # Neutron
2025-06-02 13:48:58.548202 | orchestrator |
2025-06-02 13:48:58.548209 | orchestrator | + echo
2025-06-02 13:48:58.548216 | orchestrator | + echo '# Neutron'
2025-06-02 13:48:58.548225 | orchestrator | + echo
2025-06-02 13:48:58.548232 | orchestrator | + openstack network agent list
2025-06-02 13:49:01.412802 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 13:49:01.412919 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-06-02 13:49:01.412937 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 13:49:01.412950 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-06-02 13:49:01.412982 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-06-02 13:49:01.412994 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-06-02 13:49:01.413006 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-06-02 13:49:01.413017 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-06-02 13:49:01.413028 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-06-02 13:49:01.413039 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 13:49:01.413050 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 13:49:01.413062 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-02 13:49:01.413073 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-02 13:49:01.672403 | orchestrator | + openstack network service provider list
2025-06-02 13:49:04.263320 | orchestrator | +---------------+------+---------+
2025-06-02 13:49:04.263484 | orchestrator | | Service Type | Name | Default |
2025-06-02 13:49:04.263502 | orchestrator | +---------------+------+---------+
2025-06-02 13:49:04.263517 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-06-02 13:49:04.263528 | orchestrator | +---------------+------+---------+
2025-06-02 13:49:04.549911 | orchestrator | + echo
2025-06-02 13:49:04.550373 | orchestrator |
2025-06-02 13:49:04.550410 | orchestrator | # Nova
2025-06-02 13:49:04.550423 | orchestrator |
2025-06-02 13:49:04.550435 | orchestrator | + echo '# Nova'
2025-06-02 13:49:04.550447 | orchestrator | + echo
2025-06-02 13:49:04.550459 | orchestrator | + openstack compute service list
2025-06-02 13:49:07.615628 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 13:49:07.615728 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-06-02 13:49:07.615739 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 13:49:07.615779 | orchestrator | | 7f46953d-9200-4cf3-8035-f1a129ddfb98 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T13:49:05.000000 |
2025-06-02 13:49:07.615788 | orchestrator | | 7a9c7ccc-ff61-4e9d-8985-c27cff3a7867 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T13:49:05.000000 |
2025-06-02 13:49:07.615795 | orchestrator | | a675df3d-35a9-474e-b54b-8011d9b2a3e9 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T13:49:00.000000 |
2025-06-02 13:49:07.615802 | orchestrator | | 03729de8-2c1e-4abd-aaf7-10385f6cde4e | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-02T13:48:59.000000 |
2025-06-02 13:49:07.615809 | orchestrator | | d3eaed85-5685-4e1f-956b-b8161c1dd648 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-02T13:48:59.000000 |
2025-06-02 13:49:07.615816 | orchestrator | | bfafc304-96ee-45cd-a5c2-486df00b4fde | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-02T13:48:59.000000 |
2025-06-02 13:49:07.615824 | orchestrator | | e0e1a136-119c-44cd-91a3-150efa694881 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-02T13:49:03.000000 |
2025-06-02 13:49:07.615831 | orchestrator | | bc21ce18-280c-4714-8089-ee8134c5151c | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-02T13:49:04.000000 |
2025-06-02 13:49:07.615838 | orchestrator | | ca7efdad-5174-48cc-9293-1a77eb84f2da | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-02T13:49:04.000000 |
2025-06-02 13:49:07.615845 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-02 13:49:07.914286 | orchestrator | + openstack hypervisor list
2025-06-02 13:49:12.349218 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 13:49:12.349422 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-06-02 13:49:12.349442 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 13:49:12.349453 | orchestrator | | a68d5014-c499-449d-a12d-048445cad20c | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-06-02 13:49:12.349464 | orchestrator | | d6a9189d-2fe2-434b-b23d-a50b53a3c481 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-06-02 13:49:12.349475 | orchestrator | | 95d72959-6e7c-411f-9c40-2a02c47f9e6e | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-06-02 13:49:12.349486 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-02 13:49:12.599757 | orchestrator |
2025-06-02 13:49:12.599856 | orchestrator | # Run OpenStack test play
2025-06-02 13:49:12.599872 | orchestrator |
2025-06-02 13:49:12.599884 | orchestrator | + echo
2025-06-02 13:49:12.599896 | orchestrator | + echo '# Run OpenStack test play'
2025-06-02 13:49:12.599907 | orchestrator | + echo
2025-06-02 13:49:12.599918 | orchestrator | + osism apply --environment openstack test
2025-06-02 13:49:14.367393 | orchestrator | 2025-06-02 13:49:14 | INFO  | Trying to run play test in environment openstack
2025-06-02 13:49:14.372103 | orchestrator | Registering Redlock._acquired_script
2025-06-02 13:49:14.372149 | orchestrator | Registering Redlock._extend_script
2025-06-02 13:49:14.372169 | orchestrator | Registering Redlock._release_script
2025-06-02 13:49:14.434438 | orchestrator | 2025-06-02 13:49:14 | INFO  | Task dc177dea-43b3-49f1-93d1-8d89d354a563 (test) was prepared for execution.
2025-06-02 13:49:14.434544 | orchestrator | 2025-06-02 13:49:14 | INFO  | It takes a moment until task dc177dea-43b3-49f1-93d1-8d89d354a563 (test) has been started and output is visible here.
2025-06-02 13:49:18.446455 | orchestrator |
2025-06-02 13:49:18.447054 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-02 13:49:18.448976 | orchestrator |
2025-06-02 13:49:18.450171 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-02 13:49:18.451411 | orchestrator | Monday 02 June 2025 13:49:18 +0000 (0:00:00.078) 0:00:00.078 ***********
2025-06-02 13:49:22.020724 | orchestrator | changed: [localhost]
2025-06-02 13:49:22.020825 | orchestrator |
2025-06-02 13:49:22.023950 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-02 13:49:22.026203 | orchestrator | Monday 02 June 2025 13:49:22 +0000 (0:00:03.571) 0:00:03.650 ***********
2025-06-02 13:49:26.163097 | orchestrator | changed: [localhost]
2025-06-02 13:49:26.164479 | orchestrator |
2025-06-02 13:49:26.164532 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-02 13:49:26.165297 | orchestrator | Monday 02 June 2025 13:49:26 +0000 (0:00:04.147) 0:00:07.797 ***********
2025-06-02 13:49:32.373397 | orchestrator | changed: [localhost]
2025-06-02 13:49:32.375754 | orchestrator |
2025-06-02 13:49:32.376939 | orchestrator | TASK [Create test project] *****************************************************
2025-06-02 13:49:32.378823 | orchestrator | Monday 02 June 2025 13:49:32 +0000 (0:00:06.209) 0:00:14.007 ***********
2025-06-02 13:49:36.307295 | orchestrator | changed: [localhost]
2025-06-02 13:49:36.307443 | orchestrator |
2025-06-02 13:49:36.309250 | orchestrator | TASK [Create test user] ********************************************************
2025-06-02 13:49:36.309520 | orchestrator | Monday 02 June 2025 13:49:36 +0000 (0:00:03.933) 0:00:17.940 ***********
2025-06-02 13:49:40.394859 | orchestrator | changed: [localhost]
2025-06-02 13:49:40.397897 | orchestrator |
2025-06-02 13:49:40.397953 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-02 13:49:40.399907 | orchestrator | Monday 02 June 2025 13:49:40 +0000 (0:00:04.087) 0:00:22.028 ***********
2025-06-02 13:49:52.117478 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-02 13:49:52.117588 | orchestrator | changed: [localhost] => (item=member)
2025-06-02 13:49:52.118160 | orchestrator | changed: [localhost] => (item=creator)
2025-06-02 13:49:52.118429 | orchestrator |
2025-06-02 13:49:52.119250 | orchestrator | TASK [Create test server group] ************************************************
2025-06-02 13:49:52.119971 | orchestrator | Monday 02 June 2025 13:49:52 +0000 (0:00:11.721) 0:00:33.749 ***********
2025-06-02 13:49:56.946136 | orchestrator | changed: [localhost]
2025-06-02 13:49:56.946241 | orchestrator |
2025-06-02 13:49:56.946579 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-02 13:49:56.948845 | orchestrator | Monday 02 June 2025 13:49:56 +0000 (0:00:04.829) 0:00:38.579 ***********
2025-06-02 13:50:01.585790 | orchestrator | changed: [localhost]
2025-06-02 13:50:01.586416 | orchestrator |
2025-06-02 13:50:01.587265 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-02 13:50:01.589320 | orchestrator | Monday 02 June 2025 13:50:01 +0000 (0:00:04.640) 0:00:43.219 ***********
2025-06-02 13:50:05.760586 | orchestrator | changed: [localhost]
2025-06-02 13:50:05.760705 | orchestrator |
2025-06-02 13:50:05.761851 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-02 13:50:05.763518 | orchestrator | Monday 02 June 2025 13:50:05 +0000 (0:00:04.173) 0:00:47.393 ***********
2025-06-02 13:50:09.622364 | orchestrator | changed: [localhost]
2025-06-02 13:50:09.622899 | orchestrator |
2025-06-02 13:50:09.623657 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-02 13:50:09.625351 | orchestrator | Monday 02 June 2025 13:50:09 +0000 (0:00:03.862) 0:00:51.256 ***********
2025-06-02 13:50:13.647354 | orchestrator | changed: [localhost]
2025-06-02 13:50:13.647879 | orchestrator |
2025-06-02 13:50:13.648691 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-02 13:50:13.649756 | orchestrator | Monday 02 June 2025 13:50:13 +0000 (0:00:04.025) 0:00:55.281 ***********
2025-06-02 13:50:17.516815 | orchestrator | changed: [localhost]
2025-06-02 13:50:17.516929 | orchestrator |
2025-06-02 13:50:17.518012 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-02 13:50:17.518915 | orchestrator | Monday 02 June 2025 13:50:17 +0000 (0:00:03.870) 0:00:59.152 ***********
2025-06-02 13:50:32.250073 | orchestrator | changed: [localhost]
2025-06-02 13:50:32.250158 | orchestrator |
2025-06-02 13:50:32.250166 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-02 13:50:32.250271 | orchestrator | Monday 02 June 2025 13:50:32 +0000 (0:00:14.727) 0:01:13.879 ***********
2025-06-02 13:52:48.321976 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 13:52:48.322240 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 13:52:48.322269 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 13:52:48.322291 | orchestrator |
2025-06-02 13:52:48.322311 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 13:53:18.312289 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 13:53:18.312440 | orchestrator |
2025-06-02 13:53:18.312458 | orchestrator |
STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 13:53:48.312762 | orchestrator |
2025-06-02 13:53:48.312881 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-02 13:53:50.282534 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 13:53:50.282686 | orchestrator |
2025-06-02 13:53:50.282704 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-02 13:53:50.283227 | orchestrator | Monday 02 June 2025 13:53:50 +0000 (0:03:18.036) 0:04:31.916 ***********
2025-06-02 13:54:13.757100 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 13:54:13.757245 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 13:54:13.757264 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 13:54:13.757275 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 13:54:13.757287 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 13:54:13.757298 | orchestrator |
2025-06-02 13:54:13.757310 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-02 13:54:13.757323 | orchestrator | Monday 02 June 2025 13:54:13 +0000 (0:00:23.465) 0:04:55.382 ***********
2025-06-02 13:54:45.075407 | orchestrator | changed: [localhost] => (item=test)
2025-06-02 13:54:45.075624 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-02 13:54:45.075643 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-02 13:54:45.075654 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-02 13:54:45.075665 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-02 13:54:45.075688 | orchestrator |
2025-06-02 13:54:45.076942 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-02 13:54:45.077067 | orchestrator | Monday 02 June 2025 13:54:45 +0000 (0:00:31.326) 0:05:26.708 ***********
2025-06-02 13:54:52.729086 | orchestrator | changed: [localhost]
2025-06-02 13:54:52.729208 | orchestrator |
2025-06-02 13:54:52.730467 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-02 13:54:52.731466 | orchestrator | Monday 02 June 2025 13:54:52 +0000 (0:00:07.653) 0:05:34.361 ***********
2025-06-02 13:55:06.265586 | orchestrator | changed: [localhost]
2025-06-02 13:55:06.265738 | orchestrator |
2025-06-02 13:55:06.265784 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-02 13:55:06.265809 | orchestrator | Monday 02 June 2025 13:55:06 +0000 (0:00:13.536) 0:05:47.897 ***********
2025-06-02 13:55:11.337508 | orchestrator | ok: [localhost]
2025-06-02 13:55:11.337633 | orchestrator |
2025-06-02 13:55:11.338637 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-02 13:55:11.338747 | orchestrator | Monday 02 June 2025 13:55:11 +0000 (0:00:05.074) 0:05:52.971 ***********
2025-06-02 13:55:11.383697 | orchestrator | ok: [localhost] => {
2025-06-02 13:55:11.384604 | orchestrator |  "msg": "192.168.112.102"
2025-06-02 13:55:11.385236 | orchestrator | }
2025-06-02 13:55:11.386566 | orchestrator |
2025-06-02 13:55:11.387190 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 13:55:11.387523 | orchestrator | 2025-06-02 13:55:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 13:55:11.387974 | orchestrator | 2025-06-02 13:55:11 | INFO  | Please wait and do not abort execution.
2025-06-02 13:55:11.389023 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 13:55:11.390313 | orchestrator |
2025-06-02 13:55:11.391175 | orchestrator |
2025-06-02 13:55:11.391475 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 13:55:11.393094 | orchestrator | Monday 02 June 2025 13:55:11 +0000 (0:00:00.046) 0:05:53.018 ***********
2025-06-02 13:55:11.393929 | orchestrator | ===============================================================================
2025-06-02 13:55:11.394613 | orchestrator | Create test instances ------------------------------------------------- 198.04s
2025-06-02 13:55:11.395053 | orchestrator | Add tag to instances --------------------------------------------------- 31.33s
2025-06-02 13:55:11.395634 | orchestrator | Add metadata to instances ---------------------------------------------- 23.47s
2025-06-02 13:55:11.396145 | orchestrator | Create test network topology ------------------------------------------- 14.73s
2025-06-02 13:55:11.396602 | orchestrator | Attach test volume ----------------------------------------------------- 13.54s
2025-06-02 13:55:11.397124 | orchestrator | Add member roles to user test ------------------------------------------ 11.72s
2025-06-02 13:55:11.397755 | orchestrator | Create test volume ------------------------------------------------------ 7.65s
2025-06-02 13:55:11.398455 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.21s
2025-06-02 13:55:11.398911 | orchestrator | Create floating ip address ---------------------------------------------- 5.07s
2025-06-02 13:55:11.399271 | orchestrator | Create test server group ------------------------------------------------ 4.83s
2025-06-02 13:55:11.399740 | orchestrator | Create ssh security group ----------------------------------------------- 4.64s
2025-06-02 13:55:11.400168 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.17s
2025-06-02 13:55:11.400916 | orchestrator | Create test-admin user -------------------------------------------------- 4.15s
2025-06-02 13:55:11.401318 | orchestrator | Create test user -------------------------------------------------------- 4.09s
2025-06-02 13:55:11.401642 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.03s
2025-06-02 13:55:11.402204 | orchestrator | Create test project ----------------------------------------------------- 3.93s
2025-06-02 13:55:11.402535 | orchestrator | Create test keypair ----------------------------------------------------- 3.87s
2025-06-02 13:55:11.403086 | orchestrator | Create icmp security group ---------------------------------------------- 3.86s
2025-06-02 13:55:11.403563 | orchestrator | Create test domain ------------------------------------------------------ 3.57s
2025-06-02 13:55:11.404085 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s
2025-06-02 13:55:11.989705 | orchestrator | + server_list
2025-06-02 13:55:11.989806 | orchestrator | + openstack --os-cloud test server list
2025-06-02 13:55:15.810584 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 13:55:15.810690 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-02 13:55:15.810705 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 13:55:15.810717 | orchestrator | | fe72833c-9f5f-4265-84f4-fb71d9266fbe | test-4 | ACTIVE | auto_allocated_network=10.42.0.14, 192.168.112.186 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 13:55:15.810729 | orchestrator | | 2ad6e443-26c9-49b3-a94d-a5443f3d1628 | test-3 | ACTIVE | auto_allocated_network=10.42.0.36,
192.168.112.176 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 13:55:15.810740 | orchestrator | | 3e6ed7fa-e0ca-49b4-a954-045a3a32fbae | test-2 | ACTIVE | auto_allocated_network=10.42.0.18, 192.168.112.158 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 13:55:15.810750 | orchestrator | | be055d7d-a62f-46c1-83e8-3b7f7b0b101f | test-1 | ACTIVE | auto_allocated_network=10.42.0.62, 192.168.112.122 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 13:55:15.810761 | orchestrator | | 92f9d341-d318-4849-b80d-5062b13c3acb | test | ACTIVE | auto_allocated_network=10.42.0.47, 192.168.112.102 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-02 13:55:15.810796 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-02 13:55:16.097427 | orchestrator | + openstack --os-cloud test server show test
2025-06-02 13:55:19.328203 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 13:55:19.328312 | orchestrator | | Field | Value |
2025-06-02 13:55:19.328334 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-02 13:55:19.328346 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-02 13:55:19.328357 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-02 13:55:19.328368 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-02 13:55:19.328380 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-06-02 13:55:19.328391 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-02 13:55:19.328402 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-02 13:55:19.328414 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-02 13:55:19.328442 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-02 13:55:19.328471 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-02 13:55:19.328483 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-02 13:55:19.328494 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-02 13:55:19.328509 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-02 13:55:19.328520 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-02 13:55:19.328531 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-02 13:55:19.328543 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-02 13:55:19.328553 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T13:51:01.000000 |
2025-06-02 13:55:19.328564 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-02 13:55:19.328575 | orchestrator | | accessIPv4 | |
2025-06-02 13:55:19.328593 | orchestrator | | accessIPv6 | |
2025-06-02 13:55:19.328605 | orchestrator | | addresses | auto_allocated_network=10.42.0.47, 192.168.112.102 |
2025-06-02 13:55:19.328622 | orchestrator | | config_drive | |
2025-06-02 13:55:19.328634 | orchestrator | | created | 2025-06-02T13:50:40Z |
2025-06-02 13:55:19.328648 | orchestrator | | description | None |
2025-06-02 13:55:19.328660 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-02 13:55:19.328671 | orchestrator | | hostId | c0936d0ceeb5cc06c62c9e83341acaa287c95a35d27a3948e7ada0e5 |
2025-06-02 13:55:19.328682 | orchestrator | | host_status | None |
2025-06-02 13:55:19.328693 | orchestrator | | id | 92f9d341-d318-4849-b80d-5062b13c3acb |
2025-06-02 13:55:19.328704 | orchestrator | | image | Cirros 0.6.2 (f2546d2e-aaa2-4c4f-8989-1ee9a16704db) |
2025-06-02 13:55:19.328718 | orchestrator | | key_name | test |
2025-06-02 13:55:19.328736 | orchestrator | | locked | False |
2025-06-02 13:55:19.328749 | orchestrator | | locked_reason | None |
2025-06-02 13:55:19.328762 | orchestrator | | name | test |
2025-06-02 13:55:19.328782 | orchestrator | | pinned_availability_zone | None |
2025-06-02 13:55:19.328796 | orchestrator | | progress | 0 |
2025-06-02 13:55:19.328816 | orchestrator | | project_id | d6364aafe1d84478bca5585ddc4786ae |
2025-06-02 13:55:19.328829 | orchestrator | | properties | hostname='test' |
2025-06-02 13:55:19.328842 | orchestrator | | security_groups | name='ssh' |
2025-06-02 13:55:19.328854 | orchestrator | | | name='icmp' |
2025-06-02 13:55:19.328868 | orchestrator | | server_groups | None |
2025-06-02 13:55:19.328881 | orchestrator | | status | ACTIVE |
2025-06-02 13:55:19.328900 | orchestrator | | tags | test |
2025-06-02 13:55:19.328913 | orchestrator | | trusted_image_certificates | None |
2025-06-02 13:55:19.328925 | orchestrator | | updated | 2025-06-02T13:53:55Z |
2025-06-02 13:55:19.328943 | orchestrator | | user_id | 4d23d7752a7e4a39bffda729b7c817e2 |
2025-06-02 13:55:19.328956 | orchestrator | | volumes_attached | delete_on_termination='False', id='410edf3f-92fc-4529-900a-0e93e8ceb6c9' |
2025-06-02 13:55:19.331972 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:19.636168 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-02 13:55:22.719307 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:22.719433 | orchestrator | | Field | Value | 2025-06-02 13:55:22.719451 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:22.719463 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 13:55:22.719495 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 13:55:22.719507 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 13:55:22.719518 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-02 13:55:22.719530 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 13:55:22.719541 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 13:55:22.719586 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 13:55:22.719597 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 13:55:22.719632 | orchestrator | | 
OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 13:55:22.719645 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 13:55:22.719656 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 13:55:22.719667 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 13:55:22.719686 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 13:55:22.719697 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 13:55:22.719709 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 13:55:22.719721 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T13:51:44.000000 | 2025-06-02 13:55:22.719732 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 13:55:22.719743 | orchestrator | | accessIPv4 | | 2025-06-02 13:55:22.719754 | orchestrator | | accessIPv6 | | 2025-06-02 13:55:22.719765 | orchestrator | | addresses | auto_allocated_network=10.42.0.62, 192.168.112.122 | 2025-06-02 13:55:22.719788 | orchestrator | | config_drive | | 2025-06-02 13:55:22.719799 | orchestrator | | created | 2025-06-02T13:51:24Z | 2025-06-02 13:55:22.719810 | orchestrator | | description | None | 2025-06-02 13:55:22.719828 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 13:55:22.719839 | orchestrator | | hostId | c345313fef2f30c9061064340c56a0f35583dbe9c3850f1c8863961f | 2025-06-02 13:55:22.719850 | orchestrator | | host_status | None | 2025-06-02 13:55:22.719861 | orchestrator | | id | be055d7d-a62f-46c1-83e8-3b7f7b0b101f | 2025-06-02 13:55:22.719872 | orchestrator | | image | Cirros 0.6.2 (f2546d2e-aaa2-4c4f-8989-1ee9a16704db) | 2025-06-02 13:55:22.719883 | orchestrator | | key_name | test | 2025-06-02 13:55:22.719894 | orchestrator 
| | locked | False | 2025-06-02 13:55:22.719905 | orchestrator | | locked_reason | None | 2025-06-02 13:55:22.719916 | orchestrator | | name | test-1 | 2025-06-02 13:55:22.719937 | orchestrator | | pinned_availability_zone | None | 2025-06-02 13:55:22.719949 | orchestrator | | progress | 0 | 2025-06-02 13:55:22.719973 | orchestrator | | project_id | d6364aafe1d84478bca5585ddc4786ae | 2025-06-02 13:55:22.720022 | orchestrator | | properties | hostname='test-1' | 2025-06-02 13:55:22.720035 | orchestrator | | security_groups | name='ssh' | 2025-06-02 13:55:22.720046 | orchestrator | | | name='icmp' | 2025-06-02 13:55:22.720057 | orchestrator | | server_groups | None | 2025-06-02 13:55:22.720068 | orchestrator | | status | ACTIVE | 2025-06-02 13:55:22.720079 | orchestrator | | tags | test | 2025-06-02 13:55:22.720090 | orchestrator | | trusted_image_certificates | None | 2025-06-02 13:55:22.720101 | orchestrator | | updated | 2025-06-02T13:53:59Z | 2025-06-02 13:55:22.720123 | orchestrator | | user_id | 4d23d7752a7e4a39bffda729b7c817e2 | 2025-06-02 13:55:22.720143 | orchestrator | | volumes_attached | | 2025-06-02 13:55:22.724171 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:22.968142 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-02 13:55:25.999056 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-06-02 13:55:25.999172 | orchestrator | | Field | Value | 2025-06-02 13:55:25.999189 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:25.999201 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 13:55:25.999212 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 13:55:25.999224 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 13:55:25.999235 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-02 13:55:25.999250 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 13:55:25.999268 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 13:55:25.999309 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 13:55:25.999322 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 13:55:25.999349 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 13:55:25.999361 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 13:55:25.999372 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 13:55:25.999383 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 13:55:25.999394 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 13:55:25.999405 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 13:55:25.999416 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 13:55:25.999427 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T13:52:23.000000 | 2025-06-02 13:55:25.999456 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 13:55:25.999482 | orchestrator | | accessIPv4 | | 2025-06-02 13:55:25.999494 | orchestrator | | accessIPv6 | | 2025-06-02 
13:55:25.999505 | orchestrator | | addresses | auto_allocated_network=10.42.0.18, 192.168.112.158 | 2025-06-02 13:55:25.999523 | orchestrator | | config_drive | | 2025-06-02 13:55:25.999534 | orchestrator | | created | 2025-06-02T13:52:01Z | 2025-06-02 13:55:25.999545 | orchestrator | | description | None | 2025-06-02 13:55:25.999556 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 13:55:25.999570 | orchestrator | | hostId | 2971c45d99d739be013e496f717c67315eddb30e076bf768ae976cfb | 2025-06-02 13:55:25.999583 | orchestrator | | host_status | None | 2025-06-02 13:55:25.999596 | orchestrator | | id | 3e6ed7fa-e0ca-49b4-a954-045a3a32fbae | 2025-06-02 13:55:25.999609 | orchestrator | | image | Cirros 0.6.2 (f2546d2e-aaa2-4c4f-8989-1ee9a16704db) | 2025-06-02 13:55:25.999629 | orchestrator | | key_name | test | 2025-06-02 13:55:25.999647 | orchestrator | | locked | False | 2025-06-02 13:55:25.999661 | orchestrator | | locked_reason | None | 2025-06-02 13:55:25.999674 | orchestrator | | name | test-2 | 2025-06-02 13:55:25.999701 | orchestrator | | pinned_availability_zone | None | 2025-06-02 13:55:25.999722 | orchestrator | | progress | 0 | 2025-06-02 13:55:25.999741 | orchestrator | | project_id | d6364aafe1d84478bca5585ddc4786ae | 2025-06-02 13:55:25.999760 | orchestrator | | properties | hostname='test-2' | 2025-06-02 13:55:25.999780 | orchestrator | | security_groups | name='ssh' | 2025-06-02 13:55:25.999802 | orchestrator | | | name='icmp' | 2025-06-02 13:55:25.999834 | orchestrator | | server_groups | None | 2025-06-02 13:55:25.999855 | orchestrator | | status | ACTIVE | 2025-06-02 13:55:25.999868 | orchestrator | | tags | test | 2025-06-02 13:55:25.999887 | orchestrator 
| | trusted_image_certificates | None | 2025-06-02 13:55:25.999901 | orchestrator | | updated | 2025-06-02T13:54:04Z | 2025-06-02 13:55:25.999921 | orchestrator | | user_id | 4d23d7752a7e4a39bffda729b7c817e2 | 2025-06-02 13:55:25.999932 | orchestrator | | volumes_attached | | 2025-06-02 13:55:26.003936 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:26.331945 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-02 13:55:29.377456 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:29.377567 | orchestrator | | Field | Value | 2025-06-02 13:55:29.377583 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:29.377619 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 13:55:29.377631 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 13:55:29.377643 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 13:55:29.377667 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-02 13:55:29.377679 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 13:55:29.377690 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 13:55:29.377701 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 13:55:29.377713 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 13:55:29.377743 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 13:55:29.377755 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 13:55:29.377766 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 13:55:29.377785 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 13:55:29.377796 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 13:55:29.377807 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 13:55:29.377819 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 13:55:29.377835 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T13:53:04.000000 | 2025-06-02 13:55:29.377846 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 13:55:29.377857 | orchestrator | | accessIPv4 | | 2025-06-02 13:55:29.377868 | orchestrator | | accessIPv6 | | 2025-06-02 13:55:29.377879 | orchestrator | | addresses | auto_allocated_network=10.42.0.36, 192.168.112.176 | 2025-06-02 13:55:29.377898 | orchestrator | | config_drive | | 2025-06-02 13:55:29.377917 | orchestrator | | created | 2025-06-02T13:52:45Z | 2025-06-02 13:55:29.377928 | orchestrator | | description | None | 2025-06-02 13:55:29.377939 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 13:55:29.377950 | orchestrator | | hostId | c345313fef2f30c9061064340c56a0f35583dbe9c3850f1c8863961f | 2025-06-02 13:55:29.377961 | 
orchestrator | | host_status | None | 2025-06-02 13:55:29.377972 | orchestrator | | id | 2ad6e443-26c9-49b3-a94d-a5443f3d1628 | 2025-06-02 13:55:29.378073 | orchestrator | | image | Cirros 0.6.2 (f2546d2e-aaa2-4c4f-8989-1ee9a16704db) | 2025-06-02 13:55:29.378089 | orchestrator | | key_name | test | 2025-06-02 13:55:29.378101 | orchestrator | | locked | False | 2025-06-02 13:55:29.378112 | orchestrator | | locked_reason | None | 2025-06-02 13:55:29.378123 | orchestrator | | name | test-3 | 2025-06-02 13:55:29.378159 | orchestrator | | pinned_availability_zone | None | 2025-06-02 13:55:29.378171 | orchestrator | | progress | 0 | 2025-06-02 13:55:29.378182 | orchestrator | | project_id | d6364aafe1d84478bca5585ddc4786ae | 2025-06-02 13:55:29.378193 | orchestrator | | properties | hostname='test-3' | 2025-06-02 13:55:29.378204 | orchestrator | | security_groups | name='ssh' | 2025-06-02 13:55:29.378215 | orchestrator | | | name='icmp' | 2025-06-02 13:55:29.378227 | orchestrator | | server_groups | None | 2025-06-02 13:55:29.378238 | orchestrator | | status | ACTIVE | 2025-06-02 13:55:29.378249 | orchestrator | | tags | test | 2025-06-02 13:55:29.378260 | orchestrator | | trusted_image_certificates | None | 2025-06-02 13:55:29.378279 | orchestrator | | updated | 2025-06-02T13:54:08Z | 2025-06-02 13:55:29.378304 | orchestrator | | user_id | 4d23d7752a7e4a39bffda729b7c817e2 | 2025-06-02 13:55:29.378315 | orchestrator | | volumes_attached | | 2025-06-02 13:55:29.378327 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:29.630825 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-02 13:55:32.726958 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:32.727113 | orchestrator | | Field | Value | 2025-06-02 13:55:32.727130 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:32.727142 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 13:55:32.727170 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 13:55:32.727182 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 13:55:32.727194 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-02 13:55:32.727205 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 13:55:32.727237 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 13:55:32.727249 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 13:55:32.727260 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 13:55:32.727288 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 13:55:32.727300 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 13:55:32.727311 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 13:55:32.727322 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 13:55:32.727333 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 13:55:32.727350 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 13:55:32.727361 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 13:55:32.727380 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T13:53:39.000000 | 2025-06-02 13:55:32.727391 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 13:55:32.727402 | orchestrator | | accessIPv4 | | 2025-06-02 13:55:32.727413 | orchestrator | | accessIPv6 | | 2025-06-02 13:55:32.727425 | orchestrator | | addresses | auto_allocated_network=10.42.0.14, 192.168.112.186 | 2025-06-02 13:55:32.727442 | orchestrator | | config_drive | | 2025-06-02 13:55:32.727454 | orchestrator | | created | 2025-06-02T13:53:23Z | 2025-06-02 13:55:32.727465 | orchestrator | | description | None | 2025-06-02 13:55:32.727476 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 13:55:32.727492 | orchestrator | | hostId | c0936d0ceeb5cc06c62c9e83341acaa287c95a35d27a3948e7ada0e5 | 2025-06-02 13:55:32.727503 | orchestrator | | host_status | None | 2025-06-02 13:55:32.727521 | orchestrator | | id | fe72833c-9f5f-4265-84f4-fb71d9266fbe | 2025-06-02 13:55:32.727532 | orchestrator | | image | Cirros 0.6.2 (f2546d2e-aaa2-4c4f-8989-1ee9a16704db) | 2025-06-02 13:55:32.727543 | orchestrator | | key_name | test | 2025-06-02 13:55:32.727554 | orchestrator | | locked | False | 2025-06-02 13:55:32.727565 | orchestrator | | locked_reason | None | 2025-06-02 13:55:32.727576 | orchestrator | | name | test-4 | 2025-06-02 13:55:32.727594 | orchestrator | | pinned_availability_zone | None | 2025-06-02 13:55:32.727605 | orchestrator | | progress | 0 | 2025-06-02 13:55:32.727616 | orchestrator | | project_id | d6364aafe1d84478bca5585ddc4786ae | 2025-06-02 13:55:32.727628 | orchestrator | | properties | hostname='test-4' | 2025-06-02 
13:55:32.727644 | orchestrator | | security_groups | name='ssh' | 2025-06-02 13:55:32.727662 | orchestrator | | | name='icmp' | 2025-06-02 13:55:32.727673 | orchestrator | | server_groups | None | 2025-06-02 13:55:32.727684 | orchestrator | | status | ACTIVE | 2025-06-02 13:55:32.727695 | orchestrator | | tags | test | 2025-06-02 13:55:32.727706 | orchestrator | | trusted_image_certificates | None | 2025-06-02 13:55:32.727717 | orchestrator | | updated | 2025-06-02T13:54:13Z | 2025-06-02 13:55:32.727733 | orchestrator | | user_id | 4d23d7752a7e4a39bffda729b7c817e2 | 2025-06-02 13:55:32.727745 | orchestrator | | volumes_attached | | 2025-06-02 13:55:32.727756 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 13:55:33.036401 | orchestrator | + server_ping 2025-06-02 13:55:33.039008 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 13:55:33.039047 | orchestrator | ++ tr -d '\r' 2025-06-02 13:55:35.856215 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 13:55:35.856313 | orchestrator | + ping -c3 192.168.112.176 2025-06-02 13:55:35.870492 | orchestrator | PING 192.168.112.176 (192.168.112.176) 56(84) bytes of data. 
2025-06-02 13:55:35.870609 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=1 ttl=63 time=8.40 ms
2025-06-02 13:55:36.867046 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=2 ttl=63 time=3.16 ms
2025-06-02 13:55:37.868184 | orchestrator | 64 bytes from 192.168.112.176: icmp_seq=3 ttl=63 time=2.22 ms
2025-06-02 13:55:37.868275 | orchestrator |
2025-06-02 13:55:37.868291 | orchestrator | --- 192.168.112.176 ping statistics ---
2025-06-02 13:55:37.868304 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 13:55:37.868317 | orchestrator | rtt min/avg/max/mdev = 2.224/4.594/8.403/2.720 ms
2025-06-02 13:55:37.868884 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 13:55:37.868909 | orchestrator | + ping -c3 192.168.112.122
2025-06-02 13:55:37.879200 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-06-02 13:55:37.879236 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=5.92 ms
2025-06-02 13:55:38.876950 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.52 ms
2025-06-02 13:55:39.877961 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.97 ms
2025-06-02 13:55:39.878161 | orchestrator |
2025-06-02 13:55:39.878178 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-06-02 13:55:39.878191 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 13:55:39.878202 | orchestrator | rtt min/avg/max/mdev = 1.967/3.469/5.919/1.746 ms
2025-06-02 13:55:39.878214 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 13:55:39.878226 | orchestrator | + ping -c3 192.168.112.158
2025-06-02 13:55:39.891208 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2025-06-02 13:55:39.891245 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=8.99 ms
2025-06-02 13:55:40.886534 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.77 ms
2025-06-02 13:55:41.889043 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.24 ms
2025-06-02 13:55:41.889139 | orchestrator |
2025-06-02 13:55:41.889153 | orchestrator | --- 192.168.112.158 ping statistics ---
2025-06-02 13:55:41.889165 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 13:55:41.889177 | orchestrator | rtt min/avg/max/mdev = 2.243/4.669/8.992/3.064 ms
2025-06-02 13:55:41.889188 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 13:55:41.889199 | orchestrator | + ping -c3 192.168.112.186
2025-06-02 13:55:41.901219 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data.
2025-06-02 13:55:41.901277 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.89 ms
2025-06-02 13:55:42.897220 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.41 ms
2025-06-02 13:55:43.899318 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=2.63 ms
2025-06-02 13:55:43.899426 | orchestrator |
2025-06-02 13:55:43.899444 | orchestrator | --- 192.168.112.186 ping statistics ---
2025-06-02 13:55:43.899457 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 13:55:43.899468 | orchestrator | rtt min/avg/max/mdev = 2.405/4.308/7.892/2.535 ms
2025-06-02 13:55:43.899900 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 13:55:43.899925 | orchestrator | + ping -c3 192.168.112.102
2025-06-02 13:55:43.910819 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2025-06-02 13:55:43.910878 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.84 ms
2025-06-02 13:55:44.907709 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=2.44 ms
2025-06-02 13:55:45.909312 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=2.17 ms
2025-06-02 13:55:45.909413 | orchestrator |
2025-06-02 13:55:45.909430 | orchestrator | --- 192.168.112.102 ping statistics ---
2025-06-02 13:55:45.909442 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 13:55:45.909454 | orchestrator | rtt min/avg/max/mdev = 2.173/3.818/6.839/2.138 ms
2025-06-02 13:55:45.910657 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-02 13:55:46.007766 | orchestrator | ok: Runtime: 0:11:45.597899
2025-06-02 13:55:46.076497 |
2025-06-02 13:55:46.076664 | TASK [Run tempest]
2025-06-02 13:55:46.612963 | orchestrator | skipping: Conditional result was False
2025-06-02 13:55:46.633211 |
2025-06-02 13:55:46.633388 | TASK [Check prometheus alert status]
2025-06-02 13:55:47.170347 | orchestrator | skipping: Conditional result was False
2025-06-02 13:55:47.173312 |
2025-06-02 13:55:47.173555 | PLAY RECAP
2025-06-02 13:55:47.173678 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-02 13:55:47.173751 |
2025-06-02 13:55:47.416113 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 13:55:47.418560 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 13:55:48.202094 |
2025-06-02 13:55:48.202338 | PLAY [Post output play]
2025-06-02 13:55:48.220568 |
2025-06-02 13:55:48.220773 | LOOP [stage-output : Register sources]
2025-06-02 13:55:48.293266 |
2025-06-02 13:55:48.293610 | TASK [stage-output : Check sudo]
2025-06-02 13:55:49.179789 | orchestrator | sudo: a password is required
2025-06-02 13:55:49.331852 | orchestrator | ok: Runtime: 0:00:00.013130
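The `server_ping` step traced above (the `+`-prefixed xtrace lines) can be reconstructed as a small shell function. This is a sketch inferred from the trace, not copied from the playbook source: it lists every floating IP in ACTIVE state and pings each one three times, with `tr -d '\r'` stripping carriage returns that can appear in CLI output.

```shell
#!/bin/sh
# Sketch (inferred from the xtrace output): ping every ACTIVE floating IP.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Without the `tr -d '\r'`, a trailing carriage return would make `ping` fail to resolve the address, which is why the pipeline appears in the trace.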
2025-06-02 13:55:49.349199 |
2025-06-02 13:55:49.349381 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 13:55:49.386498 |
2025-06-02 13:55:49.386745 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 13:55:49.453914 | orchestrator | ok
2025-06-02 13:55:49.462774 |
2025-06-02 13:55:49.462948 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 13:55:49.916520 | orchestrator | ok: "docs"
2025-06-02 13:55:49.916993 |
2025-06-02 13:55:50.159502 | orchestrator | ok: "artifacts"
2025-06-02 13:55:50.398911 | orchestrator | ok: "logs"
2025-06-02 13:55:50.422101 |
2025-06-02 13:55:50.422300 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 13:55:50.462666 |
2025-06-02 13:55:50.463069 | TASK [stage-output : Make all log files readable]
2025-06-02 13:55:50.756783 | orchestrator | ok
2025-06-02 13:55:50.766994 |
2025-06-02 13:55:50.767138 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 13:55:50.802815 | orchestrator | skipping: Conditional result was False
2025-06-02 13:55:50.824977 |
2025-06-02 13:55:50.825118 | TASK [stage-output : Discover log files for compression]
2025-06-02 13:55:50.851353 | orchestrator | skipping: Conditional result was False
2025-06-02 13:55:50.858635 |
2025-06-02 13:55:50.858765 | LOOP [stage-output : Archive everything from logs]
2025-06-02 13:55:50.893950 |
2025-06-02 13:55:50.894108 | PLAY [Post cleanup play]
2025-06-02 13:55:50.901961 |
2025-06-02 13:55:50.902065 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 13:55:50.959825 | orchestrator | ok
2025-06-02 13:55:50.971657 |
2025-06-02 13:55:50.971833 | TASK [Set cloud fact (local deployment)]
2025-06-02 13:55:51.005946 | orchestrator | skipping: Conditional result was False
2025-06-02 13:55:51.023374 |
2025-06-02 13:55:51.023541 | TASK [Clean the cloud environment]
2025-06-02 13:55:51.644510 | orchestrator | 2025-06-02 13:55:51 - clean up servers
2025-06-02 13:55:52.387001 | orchestrator | 2025-06-02 13:55:52 - testbed-manager
2025-06-02 13:55:52.475144 | orchestrator | 2025-06-02 13:55:52 - testbed-node-3
2025-06-02 13:55:52.566664 | orchestrator | 2025-06-02 13:55:52 - testbed-node-0
2025-06-02 13:55:52.652833 | orchestrator | 2025-06-02 13:55:52 - testbed-node-5
2025-06-02 13:55:52.747221 | orchestrator | 2025-06-02 13:55:52 - testbed-node-1
2025-06-02 13:55:52.843090 | orchestrator | 2025-06-02 13:55:52 - testbed-node-2
2025-06-02 13:55:52.942788 | orchestrator | 2025-06-02 13:55:52 - testbed-node-4
2025-06-02 13:55:53.038257 | orchestrator | 2025-06-02 13:55:53 - clean up keypairs
2025-06-02 13:55:53.055198 | orchestrator | 2025-06-02 13:55:53 - testbed
2025-06-02 13:55:53.087649 | orchestrator | 2025-06-02 13:55:53 - wait for servers to be gone
2025-06-02 13:56:03.884955 | orchestrator | 2025-06-02 13:56:03 - clean up ports
2025-06-02 13:56:04.059643 | orchestrator | 2025-06-02 13:56:04 - 0156630e-d53a-43f7-95b5-72f541d93749
2025-06-02 13:56:04.308748 | orchestrator | 2025-06-02 13:56:04 - 09c930e0-bde0-467b-9ee1-4772701eb920
2025-06-02 13:56:04.593438 | orchestrator | 2025-06-02 13:56:04 - 715bf2e4-92b9-48fb-a1f7-adb5de76e4a4
2025-06-02 13:56:04.814139 | orchestrator | 2025-06-02 13:56:04 - 86ce23f3-e5c3-408e-b5d1-476265112df7
2025-06-02 13:56:05.025810 | orchestrator | 2025-06-02 13:56:05 - a45774dc-4246-481a-8a11-f6bd4749c740
2025-06-02 13:56:05.224365 | orchestrator | 2025-06-02 13:56:05 - ac2bfc39-691d-40dc-9465-a35faab4ff3d
2025-06-02 13:56:05.439861 | orchestrator | 2025-06-02 13:56:05 - c71508c4-db66-4173-b425-5f9b06abaf01
2025-06-02 13:56:05.812383 | orchestrator | 2025-06-02 13:56:05 - clean up volumes
2025-06-02 13:56:05.945286 | orchestrator | 2025-06-02 13:56:05 - testbed-volume-2-node-base
2025-06-02 13:56:05.983358 | orchestrator | 2025-06-02 13:56:05 - testbed-volume-0-node-base
2025-06-02 13:56:06.021819 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-manager-base
2025-06-02 13:56:06.063834 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-1-node-base
2025-06-02 13:56:06.104746 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-3-node-base
2025-06-02 13:56:06.143742 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-5-node-base
2025-06-02 13:56:06.185587 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-4-node-base
2025-06-02 13:56:06.225226 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-1-node-4
2025-06-02 13:56:06.396919 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-8-node-5
2025-06-02 13:56:06.437314 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-3-node-3
2025-06-02 13:56:06.480104 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-4-node-4
2025-06-02 13:56:06.521706 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-6-node-3
2025-06-02 13:56:06.562241 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-5-node-5
2025-06-02 13:56:06.602532 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-0-node-3
2025-06-02 13:56:06.641693 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-7-node-4
2025-06-02 13:56:06.683811 | orchestrator | 2025-06-02 13:56:06 - testbed-volume-2-node-5
2025-06-02 13:56:06.722490 | orchestrator | 2025-06-02 13:56:06 - disconnect routers
2025-06-02 13:56:07.279752 | orchestrator | 2025-06-02 13:56:07 - testbed
2025-06-02 13:56:08.242678 | orchestrator | 2025-06-02 13:56:08 - clean up subnets
2025-06-02 13:56:08.283042 | orchestrator | 2025-06-02 13:56:08 - subnet-testbed-management
2025-06-02 13:56:08.451914 | orchestrator | 2025-06-02 13:56:08 - clean up networks
2025-06-02 13:56:09.149485 | orchestrator | 2025-06-02 13:56:09 - net-testbed-management
2025-06-02 13:56:09.445122 | orchestrator | 2025-06-02 13:56:09 - clean up security groups
2025-06-02 13:56:09.489923 | orchestrator | 2025-06-02 13:56:09 - testbed-management
2025-06-02 13:56:09.608499 | orchestrator | 2025-06-02 13:56:09 - testbed-node
2025-06-02 13:56:09.726807 | orchestrator | 2025-06-02 13:56:09 -
clean up floating ips 2025-06-02 13:56:09.763790 | orchestrator | 2025-06-02 13:56:09 - 81.163.192.217 2025-06-02 13:56:10.100584 | orchestrator | 2025-06-02 13:56:10 - clean up routers 2025-06-02 13:56:10.205278 | orchestrator | 2025-06-02 13:56:10 - testbed 2025-06-02 13:56:11.587236 | orchestrator | ok: Runtime: 0:00:19.787330 2025-06-02 13:56:11.591606 | 2025-06-02 13:56:11.591825 | PLAY RECAP 2025-06-02 13:56:11.591973 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-06-02 13:56:11.592042 | 2025-06-02 13:56:11.728925 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-02 13:56:11.731440 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-02 13:56:12.486552 | 2025-06-02 13:56:12.486733 | PLAY [Cleanup play] 2025-06-02 13:56:12.502965 | 2025-06-02 13:56:12.503092 | TASK [Set cloud fact (Zuul deployment)] 2025-06-02 13:56:12.561129 | orchestrator | ok 2025-06-02 13:56:12.571428 | 2025-06-02 13:56:12.571568 | TASK [Set cloud fact (local deployment)] 2025-06-02 13:56:12.595723 | orchestrator | skipping: Conditional result was False 2025-06-02 13:56:12.608946 | 2025-06-02 13:56:12.609066 | TASK [Clean the cloud environment] 2025-06-02 13:56:13.752642 | orchestrator | 2025-06-02 13:56:13 - clean up servers 2025-06-02 13:56:14.228030 | orchestrator | 2025-06-02 13:56:14 - clean up keypairs 2025-06-02 13:56:14.243297 | orchestrator | 2025-06-02 13:56:14 - wait for servers to be gone 2025-06-02 13:56:14.284454 | orchestrator | 2025-06-02 13:56:14 - clean up ports 2025-06-02 13:56:14.377621 | orchestrator | 2025-06-02 13:56:14 - clean up volumes 2025-06-02 13:56:14.469505 | orchestrator | 2025-06-02 13:56:14 - disconnect routers 2025-06-02 13:56:14.499244 | orchestrator | 2025-06-02 13:56:14 - clean up subnets 2025-06-02 13:56:14.530558 | orchestrator | 2025-06-02 13:56:14 - clean up networks 2025-06-02 13:56:14.651083 | orchestrator | 
2025-06-02 13:56:14 - clean up security groups 2025-06-02 13:56:14.687676 | orchestrator | 2025-06-02 13:56:14 - clean up floating ips 2025-06-02 13:56:14.712018 | orchestrator | 2025-06-02 13:56:14 - clean up routers 2025-06-02 13:56:15.149121 | orchestrator | ok: Runtime: 0:00:01.349943 2025-06-02 13:56:15.153162 | 2025-06-02 13:56:15.153374 | PLAY RECAP 2025-06-02 13:56:15.153521 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-06-02 13:56:15.153596 | 2025-06-02 13:56:15.299632 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-02 13:56:15.300683 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-02 13:56:16.117422 | 2025-06-02 13:56:16.117617 | PLAY [Base post-fetch] 2025-06-02 13:56:16.137037 | 2025-06-02 13:56:16.137232 | TASK [fetch-output : Set log path for multiple nodes] 2025-06-02 13:56:16.203961 | orchestrator | skipping: Conditional result was False 2025-06-02 13:56:16.220532 | 2025-06-02 13:56:16.220883 | TASK [fetch-output : Set log path for single node] 2025-06-02 13:56:16.268215 | orchestrator | ok 2025-06-02 13:56:16.276924 | 2025-06-02 13:56:16.277067 | LOOP [fetch-output : Ensure local output dirs] 2025-06-02 13:56:16.766672 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/logs" 2025-06-02 13:56:17.052076 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/artifacts" 2025-06-02 13:56:17.326440 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03e6f846fb7344cdb17eecf8934c4468/work/docs" 2025-06-02 13:56:17.350061 | 2025-06-02 13:56:17.350304 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-02 13:56:18.328126 | orchestrator | changed: .d..t...... 
./ 2025-06-02 13:56:18.328411 | orchestrator | changed: All items complete 2025-06-02 13:56:18.328454 | 2025-06-02 13:56:19.055524 | orchestrator | changed: .d..t...... ./ 2025-06-02 13:56:19.824342 | orchestrator | changed: .d..t...... ./ 2025-06-02 13:56:19.861159 | 2025-06-02 13:56:19.861318 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-02 13:56:19.888479 | orchestrator | skipping: Conditional result was False 2025-06-02 13:56:19.891461 | orchestrator | skipping: Conditional result was False 2025-06-02 13:56:19.918363 | 2025-06-02 13:56:19.918495 | PLAY RECAP 2025-06-02 13:56:19.918589 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-02 13:56:19.918636 | 2025-06-02 13:56:20.047751 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-02 13:56:20.048795 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-02 13:56:20.808968 | 2025-06-02 13:56:20.809133 | PLAY [Base post] 2025-06-02 13:56:20.824428 | 2025-06-02 13:56:20.824594 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-02 13:56:21.818628 | orchestrator | changed 2025-06-02 13:56:21.828358 | 2025-06-02 13:56:21.828496 | PLAY RECAP 2025-06-02 13:56:21.828570 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-02 13:56:21.828646 | 2025-06-02 13:56:21.973427 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-02 13:56:21.975074 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-02 13:56:22.814936 | 2025-06-02 13:56:22.815153 | PLAY [Base post-logs] 2025-06-02 13:56:22.826407 | 2025-06-02 13:56:22.826552 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-02 13:56:23.288635 | localhost | changed 2025-06-02 13:56:23.298611 | 2025-06-02 
13:56:23.298779 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-02 13:56:23.326805 | localhost | ok 2025-06-02 13:56:23.334111 | 2025-06-02 13:56:23.334265 | TASK [Set zuul-log-path fact] 2025-06-02 13:56:23.362663 | localhost | ok 2025-06-02 13:56:23.377896 | 2025-06-02 13:56:23.378045 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-02 13:56:23.415202 | localhost | ok 2025-06-02 13:56:23.421465 | 2025-06-02 13:56:23.421651 | TASK [upload-logs : Create log directories] 2025-06-02 13:56:23.928134 | localhost | changed 2025-06-02 13:56:23.930963 | 2025-06-02 13:56:23.931075 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-02 13:56:24.421964 | localhost -> localhost | ok: Runtime: 0:00:00.010884 2025-06-02 13:56:24.426598 | 2025-06-02 13:56:24.426748 | TASK [upload-logs : Upload logs to log server] 2025-06-02 13:56:24.977397 | localhost | Output suppressed because no_log was given 2025-06-02 13:56:24.982787 | 2025-06-02 13:56:24.982935 | LOOP [upload-logs : Compress console log and json output] 2025-06-02 13:56:25.055274 | localhost | skipping: Conditional result was False 2025-06-02 13:56:25.060752 | localhost | skipping: Conditional result was False 2025-06-02 13:56:25.076482 | 2025-06-02 13:56:25.076627 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-02 13:56:25.133235 | localhost | skipping: Conditional result was False 2025-06-02 13:56:25.133537 | 2025-06-02 13:56:25.141740 | localhost | skipping: Conditional result was False 2025-06-02 13:56:25.154515 | 2025-06-02 13:56:25.154821 | LOOP [upload-logs : Upload console log and json output]